Drifting perceptual patterns suggest prediction
errors fusion rather than hypothesis selection:
replicating the rubber-hand illusion on a robot
Nina-Alisa Hinz, Pablo Lanillos‡∗, Hermann Mueller, Gordon Cheng
Experimental Neuro-Cognitive Psychology, Ludwig-Maximilians-Universität, München, Germany
Institute for Cognitive Systems, Technische Universität München, München, Germany
Email: p.lanillos@tum.de
Abstract—Humans can experience fake body parts as their own merely through simple synchronous visuo-tactile stimulation. This body illusion is accompanied by a drift in the perception of the real limb towards the fake limb, suggesting an update of the body estimate resulting from the stimulation. This work compares the limb drifting patterns of human participants in a rubber hand illusion experiment with the end-effector estimation displacement of a multisensory robotic arm enabled with predictive processing perception. Results show similar drifting patterns in both the human and the robot experiments, and they also suggest that the perceptual drift is due to prediction error fusion rather than hypothesis selection. We present body inference through prediction error minimization as a single process that unites predictive coding and causal inference and that is responsible for the effects in perception when we are subjected to intermodal sensory perturbations.
Index Terms—Sensorimotor self, rubber-hand illusion, predictive coding, robotics
I. INTRODUCTION
Distinguishing between our own body and that of others
is fundamental for our understanding of the self. By learning
the relationship between sensory and motor information and
integrating them into a common percept, we gradually develop
predictors about our body and its interaction with the world
[1]. This body learning is assumed to be one of the major
processes underlying embodiment. Body-ownership illusions,
like the rubber hand illusion [2], are the most widely used
methodology to reveal information about the underlying mech-
anisms, helping us to understand how the sensorimotor self
is computed. Empirical evidence has shown that embodiment
is flexible, adaptable, driven by bottom-up and top-down
modulations and sensitive to illusions.
We replicated the passive rubber hand illusion on a multi-
sensory robot and compared it with human participants, there-
fore gaining insight into the perceptive contribution to self-
computation. Enabling a robot with human-like self-perception
[3] is important for: i) improving the machine adaptability
and providing safe human-robot interaction, and ii) testing
computational models for embodiment and obtaining some
clues about the real mechanisms. Although some computational
models have already been proposed for body-ownership,
agency and body-illusions, the majority of them are restricted
to the conceptual level or simplistic simulation [4]. Examining
real robot data and using body illusions as a benchmark
for testing the underlying mechanism enriches the evaluation
considerably. To the best of our knowledge, this is the first
study replicating the rubber hand illusion on an artificial agent
and comparing it to human data.
(This work has been supported by the SELFCEPTION project
(www.selfception.eu), European Union Horizon 2020 Programme (MSCA-IF-2016)
under grant agreement no. 741941, and by the ENB Master Program in
Neuro-Cognitive Psychology at Ludwig-Maximilians-Universität. Video to this
paper: http://web.ics.ei.tum.de/~pablo/rubberICDL2018PL.mp4. Accepted for
publication at the 2018 IEEE International Conference on Development and
Learning and Epigenetic Robotics.)
We showed that when inferring the robot body location
through prediction error minimization [5], the robot limb
drifting patterns are similar to those observed in human par-
ticipants. Human and robot data suggest that the perception of
the real hand and the rubber hand location drifts to a common
location between both hands. This supports the idea that,
instead of selecting one of two hypotheses (common cause for
stimulation vs. different causes) [6], visual and proprioceptive
information sources are merged generating an effect similar to
averaging both hypotheses [7].
The remainder of this paper is as follows: Sec. II describes
current rubber-hand illusion findings and its neural basis; Sec.
III defines the computational model designed for the robot;
Sec. IV describes the experimental setup for both human
participants and the robot; Sec. V presents the comparative
analysis of the drifting patterns; Finally, Sec. VI discusses
body estimation within the prediction error paradigm as the
potential cause of the perceptual displacement.
II. BACKGROUND
A. Rubber-hand illusion
Botvinick and Cohen [10] demonstrated that humans can
embody a rubber hand only by means of synchronous visuo-
tactile stimulation of the rubber hand and their hidden real
hand. This was measured using a questionnaire about the
illusion, but also by proprioceptive localization of the partici-
pant’s real hand. After experiencing the illusion, the perception
of their own hand’s position had drifted towards that of the
rubber hand. Since then, multiple studies have replicated the
illusion under different conditions (for a review, see [11]).
Collectively, these studies show that top-down expectations
[Fig. 1. (a) Quality of the illusion as a function of the distance between the hands: the higher, the better. (b) Intensity of the proprioceptive drift depending on the visuo-tactile stimulation time. Adapted from [8] and [9], respectively.]
about the physical appearance of a human hand, resulting
from abstract internal body models, and bottom-up sensory
information, especially spatiotemporal congruence of the stim-
ulation and distance between the hands, influence embodiment
of the fake hand [9]. In [12], they were even able to induce
body-ownership on a robotic arm. The common assumption
is that the spatial representations of both hands are merged,
as a result of minimizing the error between predicted sensory
outcomes from seeing the stimulation of the rubber hand and
the actual sensory outcome of feeling the stimulation of one’s
own hand [13]. Recently, [7] underpinned this by showing that not only does the perception of the real hand's position drift towards the rubber hand (proprioceptive drift), but the perceived position of the rubber hand also drifts towards the real hand (visual drift), i.e. both drift to a common location between the two hands.
In [6], they proposed a Bayesian Causal Inference Model for
this prediction error minimization, considering visual, tactile
and proprioceptive information, weighted according to their
precision. In combination with the prior probability of assum-
ing a common cause or different causes for the conflicting
multi-sensory information, the posterior probability of each
hypothesis is computed. A common cause, i.e. ownership of
the rubber hand, is assumed if the posterior probability exceeds
a certain threshold. This binarity of the illusion, however, is
at variance with findings of [9], demonstrating a continuous
proprioceptive drift of the stimulated hand. The proprioceptive
drift was shown to increase exponentially during the first
minute of stimulation and increasing further over the following
four minutes (Fig. 1(b)). Although the reported onset of
the illusion ranged from 11 seconds [14] to 23 seconds,
with 90 percent of subjects experiencing it within the first
minute of stimulation [15], the ongoing drifting suggests a
continuous, rather than a discrete mechanism, being involved
in embodiment.
The proposed computational models in the literature of
body-ownership illusions need further verification from exper-
imental data. Several studies showed reduced illusion scores
for larger, as compared to smaller, distances between the real
and the rubber hand [8], [16]–[19] (Fig. 1(a)), though only a few studies measured the proprioceptive drift as a function of the distance between the two hands [16], [17]. While [16]
found an increased proprioceptive drift for a larger distance,
the relative amount of drift (i.e. corrected for distance) did
not differ between the small and large distances. In [17], they
replicated this result, as long as the fake hand was near the
body. If the real hand was closer to body midline than the
fake hand, increasing distance between both hands decreased
the proprioceptive drift.
In the present study, we systematically varied the distance
between both hands. The real hand, however, stayed in the
same position for all conditions and only the fake hand had a
varying distance from the real hand in anatomically plausible
positions. In [8], where the distance between both hands
was varied by displacing the fake hand in relation to the
real hand, the fake hand was also increasingly rotated with
increasing distance. Rotational differences, nevertheless, may
influence the illusion [20], probably confounding the results
of [8]. In the current study, we systematically examined the
influence of the distance between the rubber and the real hand
on proprioceptive and visual drift. This provided the basis
for validating the computational model proposed in Sec. III
and comparing the drift of the body estimation in different
distances between the robot and humans.
B. Body illusions in the brain
A seminal contribution to possible neuronal mechanisms
underlying the rubber hand illusion came from [21], who
discovered parietal neurons in the primate brain coding for
the position of the real arm and a posturally plausible fake
monkey arm. Several fMRI studies looked into the neural
correlates for body-ownership illusions in humans (see [20] for
a review). Three areas were consistently found activated during
the rubber hand illusion: posterior parietal cortex (including
intra-parietal cortex and temporo-parietal junction), premotor
cortex and lateral cerebellum. The cerebellum is assumed
to compute the temporal relationship between visual and
tactile signals, thus playing a role in the integration of visual,
tactile and proprioceptive body-related signals [22], [23]. The
premotor and intra-parietal cortex are multisensory areas, also
integrating visual, tactile and proprioceptive signals present
during the rubber hand illusion [24]. In [20], they differentiated
the role of posterior parietal cortex, being responsible for the
recalibration of visual and tactile coordinate systems into a
common reference frame, and the role of premotor cortex,
being responsible for the integration of signals in this common,
hand-centered reference frame. Although it is known that these
areas participate in evoking the rubber hand illusion, little is
known about the underlying computations [25]. In [26], they
used dynamic causal modeling during the rubber hand illusion
to confirm to some extent that, during the illusion, visual in-
formation is weighted more than proprioceptive information -
which would be congruent with predictive coding models. Dur-
ing the illusion the intrinsic connectivity of the somatosensory
cortex was reduced, indicative of less somatosensory precision,
while the extrinsic connectivity between the lateral occipital
complex and premotor cortex was increased, indicative of
more attention to visual input. Further functional evidence for
the proposed computations is needed.
III. COMPUTATIONAL MODEL
Fig. 2. Rubber hand illusion modelled as a body estimation problem solved
using prediction error minimization. Visual features of the rubber hand are
incorporated when there is synchronous visuo-tactile stimulation, though it is
constrained by the prior belief and the expected location of the hand according
to the generative visual forward model and the estimated body configuration.
We formalized the rubber hand illusion as a body estimation
problem under the predictive processing framework. The core
idea behind this is that all features and sensory modalities
are contributing to refine body estimation through the min-
imization of the errors between sensations and predictions
[27]. During synchronous visuo-tactile stimulation, the most
plausible body configuration is perturbed due to the merging of
visual and proprioceptive information. This is coherent with
the drift of both the real hand and the rubber hand as the
participants are just pointing to the prediction of their hand
according to the current body configuration. Figure 2shows
how sensory modalities or features are contributing to the
estimation of the participants limb.
We define $x$ as the latent space variable that expresses the body configuration. We model the problem as inferring the most plausible body configuration $\hat{x}$ given the sensation likelihood and the prior: $p(\hat{x}|s) = p(s|\hat{x})\,p(\hat{x})$. We further define $s_p$, $s_v$, $s_{vt}$ as the proprioceptive, visual and visuo-tactile sensations respectively. Assuming independence of the different sources of information we get:

$$p(\hat{x}|s) = p(s_p|x)\,p(s_v|x)\,p(s_{vt}|x)\,p(x) \qquad (1)$$

The perception or estimation of the body is then solved by learning an approximation of the forward model for each feature or modality, $s = g(x)$, and minimizing a lower bound on the KL-divergence known as the negative free energy $F$ [5], [28]:

$$\log p(\hat{x}|s) = F = \sum_i \log p(s_i|x) + \log p(x) \qquad (2)$$
We obtain the estimated value of the latent variables through gradient-descent minimization, $\dot{\hat{x}} = \partial F / \partial \hat{x}$:

$$\dot{\hat{x}} = \sum_i \underbrace{\frac{s_i - g_i(\hat{x})}{\sigma_{s_i}}}_{\text{error expected sensation}} g'_i(\hat{x}) \;-\; \underbrace{\frac{\hat{x} - \mu_x}{\sigma_x}}_{\text{error prior}} \qquad (3)$$
We assume that all sensations / features follow a Gaussian distribution with (linear or non-linear) mean $g_i(x)$ and variance $\sigma_{s_i}$. The learned forward models should be differentiable with respect to the body configuration, $g'_i(\hat{x}) = \partial g_i(\hat{x})/\partial x$. By rewriting the prediction error as $e = s - g(x)$ and defining $\mu_x$ as the prior belief about the body configuration, the dynamics of the body perception model are described by (see the Appendix for the derivation and [27] for the detailed algorithm):

$$\dot{\hat{x}} = -e_x + e_p + e_v\, g'_v(\hat{x}) + e_{vt}\, g'_{vt}(\hat{x})$$
$$\dot{e}_x = \hat{x} - \mu_x - \sigma_x e_x$$
$$\dot{e}_p = s_p - \hat{x} - \sigma_p e_p$$
$$\dot{e}_v = s_v - g_v(\hat{x}) - \sigma_v e_v$$
$$\dot{e}_{vt} = s_{vt} - g_{vt}(\hat{x}) - \sigma_{vt} e_{vt}$$
$$\dot{\mu}_x = \mu_x + \lambda e_x \qquad (4)$$

where $\lambda$ is the learning-rate parameter that specifies how fast the prior of the body configuration $\mu_x$ is adjusted to the prediction error.
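For illustration, a minimal discrete-time (Euler) sketch of these dynamics is given below. It is not the implementation used on the robot: it assumes scalar visual and visuo-tactile features, placeholder precisions, and reads the last line of Eq. (4) as an error-driven adjustment of the prior.

```python
import numpy as np

def body_estimation_step(x, mu, e_x, e_p, e_v, e_vt,
                         s_p, s_v, s_vt,
                         g_v, dg_v, g_vt, dg_vt,
                         dt=0.05, lam=1.0,
                         sigma_x=1.0, sigma_p=1.0, sigma_v=1.0, sigma_vt=1.0):
    """One Euler step of the body-perception dynamics of Eq. (4).

    x, mu          : current belief (joint angles) and prior mu_x (numpy arrays)
    e_x, e_p       : prior and proprioceptive error units (same shape as x)
    e_v, e_vt      : visual and visuo-tactile error units (scalars in this sketch)
    s_p, s_v, s_vt : current proprioceptive, visual and visuo-tactile samples
    g_v, dg_v      : visual forward model and its derivative w.r.t. x
    g_vt, dg_vt    : visuo-tactile generative function (Eq. 5) and its derivative
    All parameter values are placeholders, not the ones reported in Sec. V.
    """
    # error-unit dynamics: their fixed points are the precision-weighted errors
    e_x = e_x + dt * (x - mu - sigma_x * e_x)
    e_p = e_p + dt * (s_p - x - sigma_p * e_p)
    e_v = e_v + dt * (s_v - g_v(x) - sigma_v * e_v)
    e_vt = e_vt + dt * (s_vt - g_vt(x) - sigma_vt * e_vt)
    # belief update: the prior error pulls x towards mu_x, sensory errors towards the data
    x = x + dt * (-e_x + e_p + e_v * dg_v(x) + e_vt * dg_vt(x))
    # prior adjusted by the prior error at rate lam (our reading of the mu_x update)
    mu = mu + dt * lam * e_x
    return x, mu, e_x, e_p, e_v, e_vt
```

When the visuo-tactile likelihood of Eq. (5) becomes active during synchronous stroking, its error term pulls $\hat{x}$ towards the configuration whose visual projection matches the fake hand, which is the drift analysed in Sec. V.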
The visual forward function $g_v$ and its derivative are calculated using Gaussian process estimation (see Sec. IV). The visuo-tactile generative function is computed by means of a hand-crafted likelihood, which uses the visual $o_v$ and tactile $s_t$ stimulation information (temporal $h_s$ and spatial $h_t$), and the expected position of the hand $g_v(\hat{x})$:

$$g_{vt}(\hat{x}) = s_t \cdot h_s \cdot h_t = s_t \cdot a_1 e^{-b_1 \sum (g_v(\hat{x}) - o_v)^2} \cdot a_2 e^{-b_2 \delta^2} \qquad (5)$$

where $a_1, b_1, a_2, b_2$ are parameters that shape the likelihood of the spatial plausibility and have been tuned in accordance with the data acquired from human participants; $\delta$ is the level of synchrony of the events (e.g. the timing difference between the visual and the tactile event); and $o_v$ is the other agent's end-effector location in the visual field.
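A possible reading of Eq. (5) in code is sketched below; the default parameter values are placeholders, not the tuned values reported in Sec. V.

```python
import numpy as np

def visuotactile_likelihood(s_t, g_v_xhat, o_v, delta, a1=1.0, b1=1.0, a2=1.0, b2=25.0):
    """Hand-crafted visuo-tactile generative value of Eq. (5).

    s_t      : tactile activation (0 or 1)
    g_v_xhat : predicted visual location of the own end-effector, e.g. (i, j) pixels
    o_v      : observed visual location of the other agent's end-effector
    delta    : timing difference between the visual and the tactile event (synchrony)
    a1, b1, a2, b2 are illustrative placeholder values.
    """
    spatial = a1 * np.exp(-b1 * np.sum((np.asarray(g_v_xhat) - np.asarray(o_v)) ** 2))
    temporal = a2 * np.exp(-b2 * delta ** 2)
    return s_t * spatial * temporal
```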
IV. EXPERIMENTAL SET-UP
A. Participant selection
20 volunteers (mean age: 25, 75 % female) took part in the
experiment. They received 8 euros per hour in compensation.
All participants were right-handed, had no impairment in perceiving touch on their right hand, did not wear nail polish and
did not have any special visual features (e.g. scars / tattoos)
on their right hand. They had no neurological or psychological
disorders, as indicated by self-report, and normal or corrected-
to-normal vision. None of them had experienced the rubber
hand illusion before. All participants gave informed consent
prior to the experiment.
[Fig. 3. Experimental setups used in the current study. (a) Human: rubber-hand illusion in the different distance conditions. (b) Robot: artificial version.]
B. Human experiment details
We performed the rubber-hand illusion experiment, focusing
on the proprioceptive drift of the real and the visual drift of the
rubber hand, as a function of the distance between both hands.
The experiment, depicted in Fig. 3, comprised six conditions,
each with a different distance between the real hand and the
rubber hand. The participant's real right hand was placed in a box, with the index finger 20cm away from the participant's body midline. The rubber hand was placed with its index finger 15cm, 20cm, 25cm, 30cm, 35cm or 40cm away from the participant's real right hand (5cm, 0cm, -5cm, -10cm, -15cm or -20cm away from the participant's body midline, respectively).
Participants sat in front of a wooden box, placing their hands
near the outer sides of the box. They wore a rain cape covering
their body and arms. In one of the arms, a rubber hand was
placed such that it seemed coherent with the body. With their
left hand they held a computer mouse. Each trial consisted
of four phases: localization of the real hand, localization of
the rubber hand, the stimulation phase and post-stimulation
localization of both hands.
1) First, we covered the box with a wooden top and
a blanket above it, so that no visual cues could be
used. Participants had to indicate where they currently
perceived the location of the index finger of their right
hand, pointing with the mouse on a horizontal line
presented on the screen. The line covered the whole length of the box.
2) After the participant had fixated on the rubber hand for one minute, we again covered the box and the same task was repeated for the rubber hand.
3) The box was remodeled, removing the cover and intro-
ducing a vertical board next to the participant’s right
hand so that it was not visible to the participant (Fig.
3(a)). Then the experimenter began stimulating the rubber and the real hand synchronously with two similar paintbrushes, starting at the index finger, continuing to the little finger and then starting at the index finger again, at a rate of about one brush stroke every two seconds.
4) Subsequently, participants were again asked to indicate
where they perceived the index finger of the real or the
rubber hand, starting with the real or the rubber hand
in randomized fashion. The box was covered during the
localization.
5) At the end of each trial, participants were asked to
answer ten questions related to the illusion adapted from
[29], presented randomized on the screen, using a con-
tinuous scale from -100 (indicating strong disagreement)
to 100 (indicating strong agreement).
For the localization trials, a horizontal line was presented on
the screen opposite to the box, with the screen having the
same size as the box. The localization trials were repeated ten
times to account for high variance. The proprioceptive drift and the visual drift were calculated by subtracting the average of the first localization phase from the average of the second localization phase, for the real hand and the rubber hand separately. The illusion
index was calculated by subtracting the average response
to control statements S4-S10 from the average response to
illusion statements S1-S3 [30]. Between all phases participants
were blindfolded, so they did not observe the remodeling,
which might potentially have impeded the illusion.
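For concreteness, the per-trial measures described above can be computed as follows; this is a sketch with our own variable names, and the sign convention (positive drift towards the other hand) still has to be applied according to the spatial layout of each condition.

```python
import numpy as np

def trial_measures(loc_real_pre, loc_real_post, loc_rubber_pre, loc_rubber_post,
                   questionnaire):
    """Proprioceptive drift, visual drift and illusion index for one trial (Sec. IV-B).

    loc_*         : the ten pre-/post-stimulation localization responses (in cm)
    questionnaire : dict of responses in [-100, 100] with keys 'S1'..'S10'
    """
    proprioceptive_drift = np.mean(loc_real_post) - np.mean(loc_real_pre)
    visual_drift = np.mean(loc_rubber_post) - np.mean(loc_rubber_pre)
    illusion = np.mean([questionnaire[k] for k in ('S1', 'S2', 'S3')])
    control = np.mean([questionnaire['S%d' % i] for i in range(4, 11)])
    illusion_index = illusion - control
    return proprioceptive_drift, visual_drift, illusion_index
```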
C. Robot experiment details
We tested the model on the multisensory UR-5 arm of
robot TOMM [31], as depicted in Fig. 3(b). The proprioceptive
input data were three joint angles with added noise (shoulder 1, shoulder 2 and elbow - Fig. 4(a)). The visual input was an RGB camera mounted on the head of the robot, with a 640×480 pixel resolution. The tactile input was generated by multimodal skin
cells distributed over the arm [32].
D. Learning g(x)from visual and proprioceptive data
In order to learn the sensory forward model, we applied
Gaussian process (GP) regression: gv(x)∼ GP . We pro-
grammed random trajectories in the joint space that resembled
horizontal displacements of the arm. Figure 4(a) shows the
extracted data: noisy joint angles and visual location of the
end-effector, obtained by color segmentation. To learn the
visual forward model $s_v = g_v(x)$, each sample was defined by the input joint-angle sensor values $x = (x_1, x_2, x_3)$ and the output pixel coordinates $s_v = (i, j)$. As an example, Figure 4(b) shows the visual forward model learned by GP regression with 46 samples (red dots). It describes the horizontal mean and variance (in pixels) with respect to two joint angles. The GP learning and the computation of its partial derivative with respect to $x$ are described in Appendix B.
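As an illustration, such a fit could be obtained with an off-the-shelf GP library; scikit-learn is used here only as an assumption, since the paper does not state the software used, and the file names and hyperparameters are placeholders. A manual formulation following Eqs. (11)-(14) is sketched after Appendix B.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X: (n_samples, 3) noisy joint angles (shoulder 1, shoulder 2, elbow)
# Y: (n_samples, 2) end-effector pixel coordinates (i, j) from colour segmentation
X = np.load('joint_angles.npy')          # placeholder file names
Y = np.load('end_effector_pixels.npy')

kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=1e-2)
# one independent GP per output pixel coordinate
gp_i = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, 0])
gp_j = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, 1])

x_query = np.array([[0.1, -0.4, 1.2]])   # an arbitrary body configuration
mean_i, std_i = gp_i.predict(x_query, return_std=True)
```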
[Fig. 4. Collected data. (a) Example of recorded data (joint angles with added noise, visual location and Cartesian ground-truth location of the end-effector) and a schematic of the 3-DOF arm. (b) Learnt $g_v(x)$ for the visual horizontal location: mean and variance computed by the GP depending on two joints. (c) Touch patterns extracted from tactile (117 forearm skin cells, left) and visual (right) sources.]
E. Extracting visuo-tactile data
We used proximity sensing information (infrared sensors)
from 117 different skin cells to discern when the arm was
being touched. Each sensor output a filtered signal in (0, 1). From the visual trajectory of the other's hand and the skin proximity activation, we computed the level of synchrony between the two patterns (Fig. 4(c)). Timings for tactile stimuli $s_t$ were obtained by setting a threshold over the proximity value (prox > 0.7). Timings for the other's trajectory events
were obtained through the velocity components. Detected
initial and end positions of the visual touching are depicted in
Fig. 4(c) (right, green circles).
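A possible way to turn the two signals into event timings and a synchrony level $\delta$ is sketched below; the 0.7 proximity threshold comes from the text, while the speed threshold and the event-pairing rule are assumptions.

```python
import numpy as np

def tactile_event_times(proximity, t, threshold=0.7):
    """Onsets where any of the 117 skin-cell proximity signals exceeds the threshold."""
    active = (proximity > threshold).any(axis=1)            # proximity: (T, 117)
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1) + 1
    return t[onsets]

def visual_event_times(trajectory, t, speed_threshold=0.05):
    """Onsets of the other hand's strokes, detected from the velocity components."""
    vel = np.gradient(trajectory, t, axis=0)                # trajectory: (T, 2) pixels
    moving = np.linalg.norm(vel, axis=1) > speed_threshold
    onsets = np.flatnonzero(np.diff(moving.astype(int)) == 1) + 1
    return t[onsets]

def synchrony_level(tactile_times, visual_times):
    """Average timing difference delta between tactile events and the closest visual events."""
    if len(tactile_times) == 0 or len(visual_times) == 0:
        return np.inf
    return float(np.mean([np.min(np.abs(visual_times - tt)) for tt in tactile_times]))
```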
V. RESULTS
We compared the drifting data extracted from the rubber-
hand illusion experiment in human participants and the robot.
In order to obtain the robot results, we fixed in advance the
model parameters for the learning and the body estimation
stages. The $g_v(x) \sim \mathcal{GP}$ learning hyperparameters were: signal variance $\sigma_n = \exp(0.02)$ and kernel length scale $l = \exp(0.1)$. The integration step was $\Delta t = 0.05$ (20Hz) and the error variances were $\sigma_x \in \mathbb{R}^3 = [1,1,1]$, $\sigma_p \in \mathbb{R}^3 = [1,1,1]$, $\sigma_{vt} \in \mathbb{R}^2 = [2,2]$. The adaptability rate of $\mu_x$ was $\lambda = 1$. The visual feature from the real hand $s_v$ was not used in the rubber hand illusion experiment as the arm was covered. Finally, the visuo-tactile function (Eq. 5) parameters were: $b_1 = \sigma_t / d_{max}^2$, where $\sigma_t = 0.001$ and $d_{max} = 0.0016$; $b_2 = 25$; and $a_1 = a_2 = 1$.
The robot drift was computed as the difference between the estimated end-effector position $g_v(\hat{x})$ and the ground-truth location, where $\hat{x}$ was dynamically updated by minimizing the prediction error using the proposed model.
A. Comparative analysis
[Fig. 5. Human proprioceptive drift vs. end-effector robot estimation drift after the rubber hand illusion experiment. (a) Drift in cm: positive values express displacement towards the fake hand. (b) Relative drift depending on the distance between the fake and the real arm.]
Figure 5 shows the proprioceptive drift comparison. Fig.
5(a) shows similar drifting patterns in both the robot and the
human participants. A drift towards the fake hand emerges in
both cases when the distance is small and then vanishes with
longer distances. The prior information used for the tactile
likelihood function parameters is modulated when the effect
is taking place, as the error will start propagating when the
gradient of the function is noticeable. Furthermore, the relative
drift (Fig. 5(b)) also showed that, for close distances, the
amount of displacement is the same, and then it decreases
until vanishing. The robot was tested on even closer distances
than humans, since the human experimental setup was not
equipped for distances below 15cm. The large increase in proprioceptive drift at a 10cm distance between the fake and real hand is an interesting prediction for human data that could be tested in future work.
B. Human data analysis
Data exceeding a range of two standard deviations around
the mean was excluded from further analysis. T-tests were
used to test the proprioceptive drift, the visual drift and the
illusion score in each condition against zero. In the first three
conditions the proprioceptive drift was significantly different
from zero, while it was not in the other three conditions
(Table I). Employing Bonferroni correction only leaves a trend towards significance in the 20cm distance condition. However, the average of the first three conditions is still significantly different from zero (M: 13.72, SD: 14.84, p < .001), while the average of the other conditions is not (M: 8.24, SD: 19.79, p > .05). The visual drift was only significantly different from zero in the 30cm distance condition (M: 14.01, SD: 29.72, p < .01). The illusion score was significantly different from zero in all conditions (all p < .05). Partial Pearson correlations between illusion score and proprioceptive drift, between illusion score and visual drift, and between proprioceptive drift and visual drift were not significant (all p > .05).
TABLE I
DESCRIPTIVE AND INFERENCE STATISTICS FROM PROPRIOCEPTIVE DRIFT DATA IN EACH CONDITION.
Condition mean std df t-value p-value
15 cm 14.89 mm 27.08 mm 19 2.46 .024
20 cm 12.79 mm 18.99 mm 17 2.86 .011
25 cm 13.22 mm 23.51 mm 16 2.32 .034
30 cm 4.39 mm 15.52 mm 16 1.17 .260
35 cm 7.84 mm 18.00 mm 18 1.90 .074
40 cm 5.45 mm 31.34 mm 17 0.74 .471
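The per-condition test reported in Table I amounts to a one-sample t-test against zero. A sketch with SciPy follows; the two-standard-deviation exclusion and the Bonferroni factor are taken from the text, the rest is our own wrapping.

```python
import numpy as np
from scipy import stats

def test_drift_against_zero(drift_mm, n_conditions=6):
    """One-sample t-test of the drift in one condition against zero (cf. Table I)."""
    d = np.asarray(drift_mm, dtype=float)
    # exclude data beyond two standard deviations around the mean
    d = d[np.abs(d - d.mean()) <= 2 * d.std(ddof=1)]
    t_val, p_val = stats.ttest_1samp(d, 0.0)
    return {'mean': d.mean(), 'std': d.std(ddof=1), 'df': d.size - 1,
            't': t_val, 'p': p_val, 'p_bonferroni': min(1.0, p_val * n_conditions)}
```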
[Fig. 6. Mean and standard deviation of (a) the visual and the proprioceptive drift, and (b) the illusion score, depending on the distance.]
C. Robot model analysis
We analyzed the internal variables of the proposed model
during the visuo-tactile stimulation and the induced end-
effector estimation drift towards the fake arm. Figure 7(a)
shows the robot camera view with the final end-effector
estimation overlaid after 12 seconds. Depending on the different enabled modalities (proprioceptive, visuo-tactile and proprioceptive + visuo-tactile), the body estimation evolved differently, and accordingly so did the prediction of the end-effector $g_v(x)$. Fig. 7(b) shows the evolution of the body configuration in terms of joint angles and the corresponding prediction errors. We initialized the robot's belief to a wrong body configuration to further show the adaptability of the model. During the
first five seconds, the system converged to the real body
configuration. Afterwards, when perturbing with synchronous
visuo-tactile stimulation, a bias appeared on the body joints.
This implies a drift of the robot end-effector towards the
location of the fake arm visual feature. Tactile perturbations
are shown as prediction error bumps (yellow line). Fig. 7(b), top plot, also shows how smooth the body configuration output $\mu_{x_{1:3}}$ is (blue line). The robot inferred the most plausible body joint angles given the sensory information, which in this case yielded a horizontal drift of the estimated end-effector location. A video of the evolution of the variables during the artificial rubber-hand illusion experiment can be found at http://web.ics.ei.tum.de/~pablo/rubberICDL2018PL.mp4.
[Fig. 7. Replicating the rubber-hand illusion on a robot. (a) End-effector estimation depending on the sensing modality used (proprioceptive, visuo-tactile, and proprioceptive + visuo-tactile). (b) Evolution of the body estimation and the prediction errors. (b, top) Joint angles in radians: real (black dotted line), estimated $\mu_x$ (blue dotted line) and current belief $\hat{x}$ (red line). (b, middle) Errors between expected and observed values. (b, bottom) Evolution of the $g(x)$ values during the experiment. In the first five seconds, in which there is no tactile stimulation, the estimation is refined. Next, we inject tactile stimulation while the experimenter is touching a fake arm. When the visuo-tactile stimulation becomes synchronous, a horizontal drift appears on the end-effector estimation.]
VI. DISCUSSION: BODY ESTIMATION AS AN EXPLANATION FOR THE PERCEPTUAL DRIFT
It has been shown that during the rubber hand illusion, the
location of the real hand is perceived to be closer to the rubber
hand than before. Similarly, the location of the rubber hand is
mislocalized towards the real hand [7]. Our results from the
robot and humans support the former finding: in our predictive coding scheme, the representations of both hands merged into a common location between them, due to inferring one's body location by minimizing free energy. This body estimation generated a drift of the perceived location of the real hand towards the equilibrium location, which was visible in the data from both the humans and the robot. In the comparative analysis, the drift patterns resembled each other, in terms of both absolute and relative values. The three closest tested
distances showed a substantial proprioceptive drift. All other
distances showed a smaller drift, approaching zero. For these,
the distance between the fake and the real hand was probably
too large for the fake hand to be fully embodied, supporting the simulation data of [6], which exhibit a reduced illusion probability for distances over 30cm. Although previous research and the present study support predictive coding as a probable underlying mechanism of the rubber hand illusion, other accounts cannot be ruled out by the present work.
Human illusion score data, however, did not mirror the
proprioceptive shift pattern found. For all distances, illusion
scores ranged between 20 and 35, which on our continuous scale (up to 100) resembles illusion scores of 1 to 3 previously found on discrete scales, e.g. [8]. Given this, we
can assume that we were able to induce the illusion in
every condition. Illusion scores and the proprioceptive drift,
additionally, were not correlated. This supports the view, currently under debate, that body-ownership illusions and the drift are two different, but related, processes [30]. The proprioceptive drift
is an unconscious process - in contrast to the illusion, which
is consciously accessible to the subjects. Hence, it might be
possible that the predictive coding formulation in its uncon-
scious form can explain drifting patterns, while it is not as
such sufficient to explain body-ownership illusions.
In contrast to [7], we did not find a conclusive visual
drift in the human experiment. From participants’ personal
communication, we know that many used visual landmarks to
estimate the position of the rubber hand, but also of the real hand. Differences in this strategy for localization would not
only account for the high variance we observed in the drift
data, but also for the small magnitude of the proprioceptive
drift - as compared to the values reported in other studies
(e.g. [16]). Beyond that, the method we used for localization
is probably prone to high variance due to small mouse move-
ments. Although we tried to account for that by repeating
the localization ten times, more trials might be needed as
performed in [6]. Arguably, however, the lack of visual drift
in our study does not contradict the predictive coding scheme.
Actually, some of our participants communicated that they experienced the representations of both hands merging together. This is supported by the positive mean (14.16) of
the actual control statement S10 “It felt as if the rubber hand
and my own right hand lay closer to each other”, which
was even higher than the mean response (-4.95) to the actual
illusion statement S3 “I felt as if the rubber hand were my
hand”. Further investigations, accounting for the variance in
localization, are needed to support this conjecture.
The computational model presented here also generates
predictions about the temporal dynamics of the rubber hand
illusion. The constant accumulation of information resulting
in an also accumulated drift of the body estimation (see 7(a))
is comparable to findings from [9] (see 1), who also found an
accumulation of the drift over time in humans. To provide a
finer temporal comparison, the dynamics of the human illusion
should be further investigated.
VII. CONCLUSION
We implemented the rubber hand illusion experiment on a
multisensory robot. The perception of the real hand’s position
drifted towards the rubber hand, following a similar pattern
in humans and the robot. We suggest that this proprioceptive
drift resulted from the body estimate merging towards a location between both hands. This supports an account of the proprioceptive drift
underlying body-ownership illusions in terms of the predictive
coding scheme. Future work will address the mechanisms
behind awareness of body-ownership.
APPENDIX A
FREE ENERGY GRADIENT
$$p(\hat{x}|s_p, s_v, s_{vt}) = p(s_p|\hat{x})\,p(s_v|\hat{x})\,p(s_{vt}|\hat{x})\,p(\hat{x}) \qquad (6)$$

Applying logarithms we get the negative free energy formulation:

$$F = \ln p(s_p|\hat{x}) + \ln p(s_v|\hat{x}) + \ln p(s_{vt}|\hat{x}) + \ln p(\hat{x}) \qquad (7)$$

Substituting the probability distributions by their functions $f(\cdot\,;\cdot)$, and under the Laplace approximation [33] and assuming normally distributed noise, we can compute the negative free energy as:

$$F = \ln f(s_p; g_p(x), \sigma_p) + \ln f(s_v; g_v(x), \sigma_v) + \ln f(s_{vt}; g_{vt}(x), \sigma_{vt}) + \ln f(x; \mu_x, \sigma_x)$$
$$= -\frac{(\hat{x}-\mu_x)^2}{2\sigma_x} - \frac{(s_p-g_p(\hat{x}))^2}{2\sigma_p} - \frac{(s_v-g_v(\hat{x}))^2}{2\sigma_v} - \frac{(s_{vt}-g_{vt}(\hat{x}))^2}{2\sigma_{vt}} - \frac{1}{2}\left(\ln\sigma_x + \ln\sigma_{s_p} + \ln\sigma_{s_v} + \ln\sigma_{s_{vt}}\right) + C \qquad (8)$$

In order to find $\hat{x}$ in a gradient-descent scheme we minimize Eq. 8 through the following differential equation:

$$\dot{\hat{x}} = -\frac{\hat{x}-\mu_x}{\sigma_x} + \frac{s_p - g_p(\hat{x})}{\sigma_p}\, g'_p(\hat{x}) + \frac{s_v - g_v(\hat{x})}{\sigma_v}\, g'_v(\hat{x}) + \frac{s_{vt} - g_{vt}(\hat{x})}{\sigma_{vt}}\, g'_{vt}(\hat{x}) \qquad (9)$$

In the case that $\hat{x}$ is equivalent to $g_p(x)$, i.e. using the joint angle values directly as the body configuration, the proprioceptive error can be rewritten as $s_p - \hat{x}$ and the gradient becomes 1. Generalizing for $i$ sensors we finally have:

$$\frac{\partial F}{\partial \hat{x}} = -\frac{(\hat{x}-\mu_x)}{\sigma_x} + \sum_i \frac{\partial g_i(\hat{x})^T}{\partial \hat{x}}\, \frac{s_i - g_i(\hat{x})}{\sigma_i} \qquad (10)$$
APPENDIX B
GP REGRESSION
Given sensor samples $s$ from the robot in several body configurations $x$ and the covariance function $k(x_i, x_j)$, the training is performed by computing the covariance matrix $K(X, X)$ on the collected data with noise $\sigma_n^2$:

$$k_{ij} = \sigma_n^2 I + k(x_i, x_j) \quad \forall\, i, j \in X \qquad (11)$$

The prediction of the sensory outcome $s$ given $\hat{x}$ is then computed as [34]:

$$g(\hat{x}) = k(\hat{x}, X)\, K(X, X)^{-1} s = k(\hat{x}, X)\, \alpha \qquad (12)$$

where $\alpha = \mathrm{cholesky}(K)^T \backslash (\mathrm{cholesky}(K) \backslash s)$.

Finally, in order to compute the gradient of the posterior $g'(\hat{x})$ we differentiate the kernel [35], and obtain its prediction analogously to Eq. 12:

$$g'(\hat{x}) = \frac{\partial k(\hat{x}, X)}{\partial \hat{x}}\, K(X, X)^{-1} s = \frac{\partial k(\hat{x}, X)}{\partial \hat{x}}\, \alpha \qquad (13)$$

Using the squared exponential kernel with the Mahalanobis distance covariance function, the derivative becomes:

$$g'(\hat{x}) = -\Lambda^{-1} (\hat{x} - X)^T \left(k(\hat{x}, X)^T \cdot \alpha\right) \qquad (14)$$

where $\Lambda$ is a matrix whose diagonal is populated with the length scale of each dimension ($\mathrm{diag}(1/l^2)$) and $\cdot$ is element-wise multiplication.
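The following sketch instantiates Eqs. (11)-(14) directly with numpy (squared-exponential kernel with per-dimension length scales, Cholesky solve, and the kernel derivative); it is illustrative and the hyperparameter values are placeholders.

```python
import numpy as np

def se_kernel(A, B, lengthscales):
    """Squared-exponential kernel with per-dimension length scales (Mahalanobis distance)."""
    inv_l2 = 1.0 / np.asarray(lengthscales) ** 2
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * inv_l2).sum(-1)
    return np.exp(-0.5 * d2)

def gp_fit(X, s, lengthscales, sigma_n):
    """Eq. (11): covariance of the training data plus observation noise; returns alpha."""
    K = se_kernel(X, X, lengthscales) + sigma_n ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    return np.linalg.solve(L.T, np.linalg.solve(L, s))      # alpha = L^T \ (L \ s)

def gp_predict(x_hat, X, alpha, lengthscales):
    """Eq. (12): posterior mean g(x_hat) = k(x_hat, X) alpha."""
    k = se_kernel(x_hat[None, :], X, lengthscales)[0]
    return k @ alpha

def gp_predict_grad(x_hat, X, alpha, lengthscales):
    """Eqs. (13)-(14): gradient of the posterior mean with respect to x_hat."""
    k = se_kernel(x_hat[None, :], X, lengthscales)[0]
    inv_l2 = 1.0 / np.asarray(lengthscales) ** 2
    # d k(x_hat, x_j) / d x_hat = -Lambda^{-1} (x_hat - x_j) k(x_hat, x_j)
    return -(inv_l2 * (x_hat - X)).T @ (k * alpha)
```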
REFERENCES
[1] P. Lanillos, E. Dean-Leon, and G. Cheng, “Enactive self: a study
of engineering perspectives to obtain the sensorimotor self through
enaction,” in IEEE Int. Conf. on Developmental Learning and Epigenetic
Robotics, 2017.
[2] M. Botvinick and J. Cohen, “Rubber hands ‘feel’ touch that eyes see,”
Nature, vol. 391, no. 6669, p. 756, 1998.
[3] P. Lanillos, E. Dean-Leon, and G. Cheng, “Yielding self-perception in
robots through sensorimotor contingencies,” IEEE Trans. on Cognitive
and Developmental Systems, no. 99, pp. 1–1, 2016.
[4] K. Kilteni, A. Maselli, K. P. Kording, and M. Slater, “Over my fake
body: body ownership illusions for studying the multisensory basis of
own-body perception,” Frontiers in human neuroscience, vol. 9, p. 141,
2015.
[5] K. Friston, “A theory of cortical responses,” Philosophical Transactions
of the Royal Society of London B: Biological Sciences, vol. 360, no.
1456, pp. 815–836, 2005.
[6] M. Samad, A. J. Chung, and L. Shams, “Perception of body ownership
is driven by Bayesian sensory inference,” PLoS one, vol. 10, no. 2, p.
e0117178, 2015.
[7] R. Erro, A. Marotta, M. Tinazzi, E. Frera, and M. Fiorio, “Judging the
position of the artificial hand induces a visual drift towards the real one
during the rubber hand illusion,” Scientific reports, vol. 8, no. 1, p. 2531,
2018.
[8] D. M. Lloyd, “Spatial limits on referred touch to an alien limb may
reflect boundaries of visuo-tactile peripersonal space surrounding the
hand,” Brain and cognition, vol. 64, no. 1, pp. 104–109, 2007.
[9] M. Tsakiris and P. Haggard, “The rubber hand illusion revisited:
Visuotactile integration and self-attribution,” Journal of experimental
psychology. Human perception and performance, vol. 31, no. 1, pp.
80–91, 2005.
[10] M. Botvinick and J. Cohen, “Rubber hands ‘feel’ touch that eyes see,”
Nature, vol. 391, no. 6669, p. 756, 1998.
[11] K. Kilteni, A. Maselli, K. P. Kording, and M. Slater, “Over my fake
body: Body ownership illusions for studying the multisensory basis of
own-body perception,” Frontiers in human neuroscience, vol. 9, 2015.
[12] L. Aymerich-Franch, D. Petit, G. Ganesh, and A. Kheddar, “Non-
human looking robot arms induce illusion of embodiment,” International
Journal of Social Robotics, vol. 9, no. 4, pp. 479–490, 2017.
[13] M. Tsakiris, “My body in the brain: A neurocognitive model of body-
ownership,” Neuropsychologia, vol. 48, no. 3, pp. 703–712, 2010.
[14] H. H. Ehrsson, C. Spence, and R. E. Passingham, “That’s my hand!
activity in premotor cortex reflects feeling of ownership of a limb,”
Science, vol. 305, no. 5685, pp. 875–877, 2004.
[15] A. Kalckert and H. H. Ehrsson, “The onset time of the ownership
sensation in the moving rubber hand illusion,” Frontiers in psychology,
vol. 8, p. 344, 2017.
[16] R. Zopf, G. Savage, and M. A. Williams, “Crossmodal congruency
measures of lateral distance effects on the rubber hand illusion,” Neu-
ropsychologia, vol. 48, no. 3, pp. 713–725, 2010.
[17] C. Preston, “The role of distance from the body and distance from
the real hand in ownership and disownership during the rubber hand
illusion,” Acta psychologica, vol. 142, no. 2, pp. 177–183, 2013.
[18] S. C. Pritchard, R. Zopf, V. Polito, D. M. Kaplan, and M. A. Williams,
“Non-hierarchical influence of visual form, touch, and position cues
on embodiment, agency, and presence in virtual reality,” Frontiers in
psychology, vol. 7, p. 1649, 2016.
[19] N. Ratcliffe and R. Newport, “The effect of visual, spatial and temporal
manipulations on embodiment and action,” Frontiers in human neuro-
science, vol. 11, p. 227, 2017.
[20] T. R. Makin, N. P. Holmes, and H. H. Ehrsson, “On the other hand:
Dummy hands and peripersonal space,” Behavioural brain research, vol.
191, no. 1, pp. 1–10, 2008.
[21] M. S. Graziano, D. F. Cooke, and C. S. Taylor, “Coding the location of
the arm by sight,” Science, vol. 290, no. 5497, pp. 1782–1786, 2000.
[22] H. H. Ehrsson, N. P. Holmes, and R. E. Passingham, “Touching a
rubber hand: Feeling of body ownership is associated with activity in
multisensory brain areas,” Journal of neuroscience, vol. 25, no. 45, pp.
10 564–10 573, 2005.
[23] A. Guterstam, G. Gentile, and H. H. Ehrsson, “The invisible hand
illusion: Multisensory integration leads to the embodiment of a discrete
volume of empty space,” Journal of cognitive neuroscience, vol. 25,
no. 7, pp. 1078–1099, 2013.
[24] A. Guterstam, H. Zeberg, V. M. Özçiftci, and H. H. Ehrsson, “The magnetic touch illusion: A perceptual correlate of visuo-tactile integration
in peripersonal space,” Cognition, vol. 155, pp. 44–56, 2016.
[25] M. A. J. Apps and M. Tsakiris, “The free-energy self: A predictive
coding account of self-recognition,” Neuroscience and biobehavioral
reviews, vol. 41, pp. 85–97, 2014.
[26] D. Zeller, K. J. Friston, and J. Classen, “Dynamic causal modeling of
touch-evoked potentials in the rubber hand illusion,” NeuroImage, vol.
138, pp. 266–273, 2016.
[27] P. Lanillos and G. Cheng, “Adaptive robot body learning and estimation
through predictive coding,” arXiv preprint arXiv:1805.03104, 2018.
[28] R. Bogacz, “A tutorial on the free-energy framework for modelling
perception and learning,” Journal of mathematical psychology, 2015.
[29] M. P. M. Kammers, F. de Vignemont, L. Verhagen, and H. C. Dijkerman,
“The rubber hand illusion in action,” Neuropsychologia, vol. 47, no. 1,
pp. 204–211, 2009.
[30] Z. Abdulkarim and H. H. Ehrsson, “No causal link between changes in
hand position sense and feeling of limb ownership in the rubber hand
illusion,” Attention, perception & psychophysics, vol. 78, no. 2, pp. 707–
720, 2016.
[31] E. Dean-Leon, B. Pierce, F. Bergner, P. Mittendorfer, K. Ramirez-
Amaro, W. Burger, and G. Cheng, “Tomm: Tactile omnidirectional
mobile manipulator,” in Robotics and Automation (ICRA), IEEE Int.
Conf. on, 2017, pp. 2441–2447.
[32] P. Mittendorfer and G. Cheng, “Humanoid multimodal tactile-sensing
modules,” IEEE Trans. on robotics, vol. 27, no. 3, pp. 401–410, 2011.
[33] K. Friston, “Hierarchical models in the brain,” PLoS computational
biology, vol. 4, no. 11, p. e1000211, 2008.
[34] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine
Learning (Adaptive Computation and Machine Learning). The MIT
Press, 2005.
[35] A. McHutchon, “Differentiating gaussian processes,” 2013.