Article

That's My Hand! Activity in Premotor Cortex Reflects Feeling of Ownership of a Limb


Abstract

When we look at our hands, we immediately know that they are part of our own body. This feeling of ownership of our limbs is a fundamental aspect of self-consciousness. We have studied the neuronal counterparts of this experience. A perceptual illusion was used to manipulate feelings of ownership of a rubber hand presented in front of healthy subjects while brain activity was measured by functional magnetic resonance imaging. The neural activity in the premotor cortex reflected the feeling of ownership of the hand. This suggests that multisensory integration in the premotor cortex provides a mechanism for bodily self-attribution.


... The first neuroimaging study of the RHI using functional magnetic resonance imaging (fMRI) by Ehrsson et al. (2004) found increased activity in premotor and intraparietal areas. These areas are known for their multisensory properties (Graziano & Botvinick, 2002; Stein & Stanford, 2008) and are therefore ideal candidates for the integrative processes underpinning the RHI and body ownership more generally. ...
... In a whole-brain (univariate voxel-wise) analysis, hypothesis testing is conducted simultaneously for each voxel in the brain and then corrected for multiple comparisons. A ubiquitous approach within fMRI research, including among neuroimaging studies of body ownership, is to further investigate certain regions of interest (ROIs) (e.g., Bekrater-Bodmann et al., 2014; Brozzoli et al., 2012; Ehrsson et al., 2004; Ehrsson et al., 2005; Guterstam et al., 2015; Limanowski et al., 2014; Limanowski & Blankenburg, 2016a; Petkova et al., 2011). ROIs can be defined based on anatomy, previous literature, or a separate localizer scan and are considered justified only when a strong a priori hypothesis exists (Duncan et al., 2009; Poldrack, 2007). ...
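The statistical contrast between the two approaches can be made concrete with a small sketch on synthetic data: one t-test per voxel followed by a multiple-comparison correction versus a single test on an a priori ROI average. All numbers below (subject count, voxel count, effect size, the Benjamini-Hochberg correction) are illustrative assumptions, not the analyses used in any of the cited studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: 20 subjects x 5000 voxels of contrast estimates (e.g.,
# synchronous minus asynchronous stroking), with a true effect confined
# to the first 50 voxels ("the ROI").
n_subjects, n_voxels = 20, 5000
data = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
roi = slice(0, 50)
data[:, roi] += 1.2  # true effect inside the ROI

# Whole-brain analysis: one t-test per voxel, then a multiple-comparison
# correction (Benjamini-Hochberg FDR here for brevity; fMRI studies more
# often use cluster-based or family-wise error correction).
t_map, p_map = stats.ttest_1samp(data, 0.0, axis=0)
order = np.argsort(p_map)
thresholds = np.arange(1, n_voxels + 1) / n_voxels * 0.05
below = np.nonzero(p_map[order] <= thresholds)[0]
n_sig_wholebrain = int(below[-1] + 1) if below.size else 0

# ROI analysis: average the contrast within the a priori region and run
# a single t-test, so no correction across voxels is needed.
t_roi, p_roi = stats.ttest_1samp(data[:, roi].mean(axis=1), 0.0)

print(f"whole-brain significant voxels: {n_sig_wholebrain}, ROI p = {p_roi:.2e}")
```

The sketch shows why ROI analyses are more sensitive (one test instead of thousands) and why they require a strong a priori hypothesis: selecting the region after seeing the data would invalidate the single-test logic.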
... In addition, one study utilized the enfacement illusion (Tsakiris, 2008), and one utilized the rubber foot illusion (Crea et al., 2015). The illusion conditions were generally compared with a control condition, for example, asynchronous stroking (Ehrsson et al., 2004), stroking in an incongruent position (Limanowski & Blankenburg, 2015), stroking with the arm detached from the body, or the blood-oxygen-level-dependent (BOLD) response before the induction of the illusion (Matsumoto et al., 2020). ...
Article
Full-text available
How do we feel that we own our body? By manipulating the integration of multisensory signals and creating the illusory experience of owning external body‐parts and entire bodies, researchers have investigated the neurofunctional correlates of body ownership. Recent attempts to synthesize the neuroimaging literature of body ownership through meta‐analysis have shown partly inconsistent results. A large proportion of functional magnetic resonance imaging (fMRI) findings on body ownership includes analyses based on regions of interest (ROIs). This approach can produce inflated findings when results are synthesized in meta‐analyses. We conducted a systematic search of the fMRI literature of ownership of body‐parts and entire bodies. Three activation likelihood estimation (ALE) meta‐analyses were conducted, testing the impact of including ROI‐based findings. When both whole‐brain and ROI‐based results were included, frontal and posterior parietal multisensory areas were associated with body ownership. When only ROI‐based results were included, larger areas of the frontal and posterior parietal cortices and the middle occipital gyrus were associated with body ownership. A whole‐brain meta‐analysis, excluding ROI‐based results, found no significant convergence of activation across the brain. These findings highlight the difficulty of quantitatively synthesizing a neuroimaging field where a large part of the literature is based on findings from ROI‐based analyses. We discuss these findings in the light of current practices within this field of research and highlight current problems of meta‐analytic approaches of body ownership. We recommend the sharing of unthresholded data as a means to facilitate future meta‐analyses of the neuroimaging literature of body ownership.
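The activation likelihood estimation (ALE) method mentioned in the abstract models each reported activation focus as a Gaussian probability blob and asks where foci converge across experiments. A toy one-dimensional sketch of the core combination rule follows; real ALE operates on 3D brains with sample-size-dependent kernels, and the grid, kernel width, peak probability, and foci below are made-up values for illustration only.

```python
import numpy as np

# 1D "brain" coordinate in mm, plus assumed kernel parameters.
grid = np.linspace(0.0, 100.0, 1001)
sigma, peak_prob = 5.0, 0.9

def modeled_activation(foci):
    """Per-experiment modeled activation map: probabilistic union of
    Gaussian-blurred activation foci."""
    ma = np.zeros_like(grid)
    for focus in foci:
        g = peak_prob * np.exp(-0.5 * ((grid - focus) / sigma) ** 2)
        ma = 1.0 - (1.0 - ma) * (1.0 - g)  # union: P(at least one focus here)
    return ma

# Hypothetical foci (mm) from three experiments, converging near x = 50;
# one isolated focus at x = 20 does not replicate across experiments.
experiments = [[48.0, 70.0], [51.0], [50.0, 20.0]]

# ALE map: union of the modeled activation maps across experiments.
ale = 1.0 - np.prod([1.0 - modeled_activation(f) for f in experiments], axis=0)
peak_location = grid[np.argmax(ale)]
print(f"convergence peak near {peak_location:.1f} mm")
```

The sketch also illustrates the abstract's concern: if ROI-based studies all report foci in the same pre-selected region, the convergence the ALE map shows there is partly built in by the analysis choice rather than discovered.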
... The previous neuroscience literature has indicated that there is a difference in neural representation between the internal model for body ownership (Collins et al., 2017; Ehrsson et al., 2004, 2005; Gentile et al., 2015; Guterstam et al., 2013, 2015, 2019; Limanowski and Blankenburg, 2016) and the internal model for agency (Chambon et al., 2013; David et al., 2008; Farrer et al., 2003; Farrer and Frith, 2002; Nahab et al., 2011; Schnell et al., 2007; Sperduti et al., 2011; Uhlmann et al., 2020; Yomogida et al., 2010). For instance, patients with dystonia with lesions in the cerebellum, one of the crucial neural bases of the internal model, showed abnormalities in agency perception; these abnormalities may have been due to the failure of the cerebellum to detect the mismatch between vision and movement (Delorme et al., 2016). ...
... Such perceptual changes were based on plastic changes in neural activity in the sensorimotor cortices (Della-Maggiore et al., 2004; Vahdat et al., 2011). The neural basis of the body ownership illusion can also be found in sensorimotor areas, including the premotor cortex and primary somatosensory cortex (Collins et al., 2017; Ehrsson et al., 2004). This suggests that another internal model, located in the sensorimotor cortices rather than in the cerebellum, and its learning processes determine the speed at which one's sense of body ownership is updated. ...
... In fact, bimodal neurons in the parietal cortex allow a monkey to immediately incorporate a tool into its body representation (Iriki et al., 1996), indicating that body representation serves as a basis of body ownership and often varies in daily life. This is why body ownership is easily transferred in the RHI setup (Botvinick and Cohen, 1998; Ehrsson et al., 2004) or to a VR avatar (Blanke et al., 2002). In contrast to body ownership, the sense of agency requires robustness. ...
Article
Full-text available
Bodily self-consciousness has been considered a sensorimotor root of self-consciousness. If this is the case, how does sensorimotor memory, which is important for the prediction of sensory consequences of volitional actions, influence awareness of bodily self-consciousness? This question is essential for understanding the effective acquisition and recovery of self-consciousness following its impairment, but it has remained unexamined. Here, we investigated how body ownership and agency recovered following body schema distortion in a virtual reality environment along with two kinds of motor memories: memories that were rapidly updated and memories that were gradually updated. We found that although agency and body ownership recovered in parallel, the recovery of body ownership was predicted by fast memories and that of agency was predicted by slow memories. Thus, the bodily self was represented in multiple motor memories with different dynamics. This finding demystifies the controversy about the causal relationship between body ownership and agency.
... Hence, the participant sees the rubber hand being stroked and feels the corresponding tactile stimulation on the out-of-sight real hand. The majority of participants quickly experience a shift in proprioception, so that the rubber hand feels as if it were their own and they sense the touches as originating directly from the rubber hand (Longo et al., 2008; Reader et al., 2021), with the illusion occurring for most people within approximately 10-15 s (Ehrsson et al., 2004; Lloyd, 2007). If the stroking and tapping on the real and rubber hands are not synchronous, or more precisely, if the degree of asynchrony is greater than approximately 300 ms (Shimada et al., 2009; Ehrsson and Chancel, 2019), then the illusion is not experienced by the majority of participants. ...
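The ~300 ms asynchrony criterion described above can be sketched as a simple decision rule. The onset times and the helper functions below are hypothetical illustrations, not code from any of the cited studies.

```python
# Illustrative sketch: given onset times (in seconds) of seen strokes on the
# rubber hand and felt strokes on the real hand, estimate the visuo-tactile
# asynchrony and apply the ~300 ms criterion reported by Shimada et al. (2009).

def mean_asynchrony(seen_onsets, felt_onsets):
    """Mean absolute lag between paired seen and felt stroke onsets."""
    return sum(abs(s - f) for s, f in zip(seen_onsets, felt_onsets)) / len(seen_onsets)

def illusion_expected(seen_onsets, felt_onsets, threshold_s=0.3):
    """True if the mean asynchrony is below the ~300 ms threshold."""
    return mean_asynchrony(seen_onsets, felt_onsets) < threshold_s

# Synchronous stroking: lags of a few tens of milliseconds.
seen_sync = [0.00, 1.00, 2.00, 3.00]
felt_sync = [0.02, 1.03, 1.98, 3.01]

# Asynchronous control: the felt touch lags the seen touch by ~500 ms.
felt_async = [0.50, 1.52, 2.49, 3.51]

print(illusion_expected(seen_sync, felt_sync))   # True
print(illusion_expected(seen_sync, felt_async))  # False
```

In reality the criterion is a population-level tendency rather than a hard per-trial cutoff, so the boolean output should be read as "illusion expected for the majority of participants."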
... The level of proprioceptive drift is significantly greater after the synchronous condition than after the asynchronous and other control conditions; in addition, typically, the stronger the subjective illusion, the stronger this difference in proprioceptive drift (Abdulkarim and Ehrsson, 2016). Although proprioceptive drift can occur outside the context of the RHI (Holmes et al., 2006) and the subjective illusion cannot be equated with drift (Rohde et al., 2011), the significant differences in proprioceptive drift between the synchronous and asynchronous conditions have been well replicated (Tsakiris and Haggard, 2005; Tsakiris et al., 2010; Abdulkarim and Ehrsson, 2016; Abdulkarim et al., 2021); proprioceptive drift is related to the RHI because visuoproprioceptive combination and recalibration are key elements of the illusion (Ehrsson et al., 2004; Abdulkarim and Ehrsson, 2016; Fuchs et al., 2016). ...
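The drift comparison described above amounts to a paired contrast of pre- versus post-stroking pointing positions across conditions. A minimal sketch on synthetic numbers follows; the sample size, drift magnitudes, and noise levels are invented for illustration and are not data from any cited study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Proprioceptive drift: the shift, toward the rubber hand, of where
# participants point to their hidden hand after stroking vs. before.
n = 24
# Pointing positions in cm along the axis toward the rubber hand.
pre_sync = rng.normal(0.0, 1.0, n)
post_sync = pre_sync + rng.normal(2.5, 1.0, n)    # clear drift after synchronous
pre_async = rng.normal(0.0, 1.0, n)
post_async = pre_async + rng.normal(0.3, 1.0, n)  # little drift after asynchronous

drift_sync = post_sync - pre_sync
drift_async = post_async - pre_async

# Paired comparison: drift should be larger after synchronous stroking.
t, p = stats.ttest_rel(drift_sync, drift_async)
print(f"mean drift sync {drift_sync.mean():.1f} cm, "
      f"async {drift_async.mean():.1f} cm, p = {p:.4f}")
```

The pre/post difference within each condition controls for baseline pointing bias, which is why drift is defined as a change score rather than a raw post-stroking position.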
... Finally, the rubber hand illusion is supported by neuroscience. In functional magnetic resonance imaging (fMRI) experiments, the RHI has been associated with increased blood oxygenation level-dependent (BOLD) contrast signals in the premotor cortex and in posterior parietal and subcortical regions associated with multisensory integration of body-related sensory signals (Ehrsson et al., 2004; Gentile et al., 2013; Limanowski and Blankenburg, 2016; Grivaz et al., 2017). Moreover, the stronger the activation difference in the multisensory frontoparietal areas between illusion (synchronous) and control conditions (asynchronous and spatial incongruence), the stronger the illusion as measured with questionnaires (Ehrsson et al., 2004; Ehrsson et al., 2005; Brozzoli et al., 2012; Gentile et al., 2013), proprioceptive drift (Brozzoli et al., 2012), or threat-evoked SCR; difference scores on these tests correlate with the condition-specific activations. ...
Article
Full-text available
Some recent papers by P. Lush and colleagues have argued that the rubber hand illusion (RHI), where participants can feel a rubber hand as their own under appropriate multisensory stimulation, may be caused mainly by hypnotic suggestibility and expectations (demand characteristics). These papers rely primarily on a study with 353 participants who took part in a RHI experiment carried out in a classical way with brush stroking. Participants experienced a synchronous condition where the rubber hand was seen to be touched in synchrony with touch felt on their corresponding hidden real hand, or the touches were applied asynchronously as a control. Each participant had a related measure of their hypnotisability on a scale known as the Sussex-Waterloo Scale of Hypnotisability (SWASH). The authors found a correlation between the questionnaire ratings of the RHI in the synchronous condition and the SWASH score. From this, they concluded that the RHI is largely driven by suggestibility and further proposed that suggestibility and expectations may even entirely explain the RHI. Here we examine their claims in a series of extensive new analyses of their data. We find that at every level of SWASH, the synchronous stimulation results in greater levels of the illusion than the asynchronous condition; moreover, proprioceptive drift is greater in the synchronous case at every level of SWASH. Thus, while the level of hypnotisability does modestly influence the subjective reports (higher SWASH is associated with somewhat higher illusion ratings), the major difference between the synchronous and asynchronous stimulation is always present. 
Furthermore, by including in the model the participants’ expectancy ratings of how strongly they initially believed they would experience the RHI in the two conditions, we show that expectations had a very small effect on the illusion ratings; model comparisons further demonstrate that the multisensory condition is two-to-three-times as dominant as the other factors, with hypnotisability contributing modestly and expectations negligibly. Thus, although the results indicate that trait suggestibility may modulate the RHI, presumably through intersubject variations in top-down factors, the findings also suggest that the primary explanation for the RHI is as a multisensory bodily illusion.
... Body ownership, one of the three components of embodiment together with agency [i.e., the feeling of initiating and being in control of one's own actions (e.g., David et al., 2008; Braun et al., 2018)] and location [i.e., the experienced location of the body in space (e.g., Blanke, 2012)], is the cognition that a body and/or its parts belong to oneself (Blanke, 2012). Body ownership results from the integration and interpretation of multimodal sensory information in the brain, importantly visual, somatosensory, and proprioceptive signals (Botvinick and Cohen, 1998; Maravita et al., 2003; Ehrsson, 2004). Neuroimaging studies have shown that body ownership relies on frontal premotor, somatosensory, temporoparietal junction, and insular brain regions (Botvinick and Cohen, 1998; Maravita et al., 2003; Ehrsson, 2004; Tsakiris, 2010). ...
... This led participants to experience their own hand as more stone-like than in the control condition (e.g., they rated their own hand as stiffer, heavier, harder, more unnatural, and less sensitive). Numerous variants of the rubber hand illusion paradigm have shown that body ownership can be experimentally induced for a body part or an entire body other than one's own in healthy young adults (Ehrsson, 2004; Tsakiris and Haggard, 2005; Tsakiris et al., 2006; Lloyd, 2007; Haans et al., 2008; Kammers et al., 2009; van der Hoort et al., 2011; Kalckert and Ehrsson, 2012; Lopez et al., 2012; Pozeg et al., 2014; Crea et al., 2015; Flögel et al., 2016; Wen et al., 2016; Burin et al., 2017; Riemer et al., 2019; Matsumoto et al., 2020), elderly people (Burin and Kawashima, 2021), and neurologic patients (Zeller et al., 2011; Lenggenhager et al., 2012; Burin et al., 2015; Wawrzyniak et al., 2018). The demonstrated flexibility of the brain is indeed crucial to preserving a stable body image while the perceptual characteristics of the body constantly vary in everyday life. ...
Article
Full-text available
To offer engaging neurorehabilitation training to neurologic patients, motor tasks are often visualized in virtual reality (VR). Recently introduced head-mounted displays (HMDs) make it possible to realistically mimic the body of the user from a first-person perspective (i.e., as an avatar) in a highly immersive VR environment. In this immersive environment, users may embody avatars with different body characteristics. Importantly, body characteristics impact how people perform actions. Therefore, altering body perception using immersive VR may be a powerful tool to promote motor activity in neurologic patients. However, the ability of the brain to adapt motor commands based on a perceived modified reality has not yet been fully explored. To fill this gap, we "tricked the brain" using immersive VR and investigated whether multisensory feedback modulating the physical properties of an embodied avatar influences motor brain networks and control. Ten healthy participants were immersed in a virtual environment using an HMD, where they saw an avatar from a first-person perspective. We slowly transformed the surface of the avatar (i.e., the "skin material") from human to stone. We reinforced this visual change by repetitively touching the real arm of the participant and the arm of the avatar with a (virtual) hammer, while progressively replacing the sound of the hammer hitting skin with the sound of it hitting stone, played via loudspeaker. We applied single-pulse transcranial magnetic stimulation (TMS) to evaluate changes in motor cortical excitability associated with the illusion. Further, to investigate whether the "stone illusion" affected motor control, participants performed a reaching task with the human and stone avatars. Questionnaires assessed the subjectively reported strength of embodiment and illusion.
Our results show that participants experienced the "stone arm illusion." In particular, they rated their arm as heavier, colder, stiffer, and more insensitive when immersed with the stone avatar than with the human avatar, without the illusion affecting their experienced feeling of body ownership. Further, the reported illusion strength was associated with enhanced motor cortical excitability and faster movement initiation, indicating that participants may have physically mirrored and compensated for the embodied body characteristics of the stone avatar. Together, immersive VR has the potential to influence motor brain networks by subtly modifying the perception of reality, opening new perspectives for the motor recovery of patients.
... If the participant's little finger is stimulated, it must be the little finger of the fake hand that is stimulated as well, not another finger or another part of the hand (Costantini & Haggard, 2007; Limanowski et al., 2014; Riemer et al., 2014). Likewise, the illusion, whether measured by proprioceptive drift or by subjective reports of vividness, does not work if the rubber hand is placed in a position different from that of the real hand (e.g., rotated by 180°) (Costantini & Haggard, 2007; Ehrsson et al., 2004; Tsakiris & Haggard, 2005) or too far away from it (Kalckert & Ehrsson, 2014b). The appearance of the object to be embodied also affects its embodiment: the classic rubber hand illusion does not occur if the object meant to represent the hand does not look like a human hand (Hohwy & Paton, 2010; Tsakiris, 2010). ...
... Similarly, if the laterality of the fake hand differs from that of the participant's stimulated hand, the illusion does not occur (Tsakiris et al., 2007). The posture of the fake hand must also be anatomically plausible (Ehrsson et al., 2004; Kalckert & Ehrsson, 2012; Kilteni et al., 2015; Tsakiris, 2010). Texture (Haans et al., 2008) and size (Armel & Ramachandran, 2003) also contribute to a realistic representation. ...
... The two types of stimuli thus do not come from the same place. Many studies have examined the impact of the distance, whether vertical or horizontal, between the fake hand and the real hand (Ehrsson et al., 2004; Haans et al., 2008; IJsselsteijn et al., 2006; Kalckert & Ehrsson, 2014b; Zopf et al., 2010). For example, Zopf and colleagues (2010) showed that the rubber hand illusion could still occur, and with the same intensity, whether the fake hand was placed 15 or 45 cm from the real hand in the horizontal plane. ...
Thesis
Kinesthesia is the conscious perception of the movements of the different parts of one's own body in space. It results from the integration of multiple sensory signals, such as visual, proprioceptive, and tactile signals. Multisensory integration is thought to depend on three types of congruence: temporal, spatial, and semantic. For integration to be optimal, the different sensory signals should occur at the same time, in the same place, and be semantically associated. The main objective of this thesis was to study the mechanisms of sensory integration at play in kinesthesia using artificial signals. To this end, we studied the extent to which artificial sensory signals could take part in kinesthesia as a function of their degree of incongruence with natural signals, thereby generating situations that could not be obtained with natural signals. We adapted the mirror paradigm to virtual reality, replacing the natural visual signals (i.e., the reflection of the arm in the mirror) with artificial signals (i.e., the arms of an avatar). This implementation of the mirror paradigm in virtual reality allowed us to manipulate different degrees of semantic incongruence (morphological dissimilarity between the avatar and the real body) or spatial incongruence (the perspective from which the avatar is seen) between visual stimuli coming from the avatar and non-visual (notably proprioceptive) stimuli coming from the participant's body. Taken together, our results showed that semantic or spatial incongruence did not prevent visual information from the avatar from contributing to (and thus being integrated into) the kinesthetic percept, even when the level of incongruence was high (e.g., the avatar's arms represented by three dots; third-person perspective).
However, this contribution decreased as the level of incongruence increased, so that visual information carried less weight in the (multisensory) kinesthetic percept the greater the incongruence (Articles 1-3). In this work, we also explored the hypothesis that only visual signals originating from the participant's own body, or from any embodied object, can be taken into account for kinesthetic purposes. This hypothesis is partially supported by a cross-sectional analysis of the results of the five experiments in Articles 2, 3 and 4, which revealed a positive link between the level of embodiment of the avatar and the intensity of the kinesthetic illusions, as assessed by subjective measures of illusion speed and duration. However, the dedicated study (Article 4), which aimed to experimentally manipulate the level of embodiment of an avatar, did not provide evidence of such a link. Finally, we also tested whether an auditory stimulus generated by movement sonification could contribute to kinesthesia in the absence of vision. In that study (Article 5), auditory information previously associated with movements was not effective in generating movement illusions. Taken together, the results obtained highlight the contribution of artificial visual stimuli to kinesthesia. Moreover, they indicate that this contribution varies with the degree of semantic and spatial congruence between artificial and natural stimuli.
... Behavioral or physiological evidence is derived from measures like proprioceptive drift, skin conductance, time of onset or duration, and temperature (e.g., Ehrsson et al. 2004; de Haan et al. 2017; Lane et al. 2017; Yeh et al. 2017; Critchley et al. 2021). The multisensory integration hypothesis suggests that the neural activity mediating the experience involves the premotor cortex, the intraparietal sulcus, the anterior insula, and the sensorimotor cortex (Ehrsson et al. 2004; Kammers et al. 2009; Petkova et al. 2011; Bekrater-Bodmann et al. 2014; della Gatta et al. 2016; Limanowski and Blankenburg 2016; Lira et al. 2018; Peviani et al. 2018). ...
... It has not yet been possible, however, to identify a sufficient set of functionally specific neural correlates; in fact, even when employing the higher time resolution of EEG, results have been inconsistent (for a summary of the relevant studies, see Rao and Kayser 2017). ...
... All participants underwent the RHI task procedure; each time, the stroking continued for 1 min, both because of our findings from stage 1 and because other investigations of the RHI have found that it can occur in less than 12 s (Ehrsson et al. 2004; Tsakiris and Haggard 2005; Arzy et al. 2006; Tsakiris et al. 2007). Moreover, because the standard RHI induction task in stage 1 enabled us to distinguish between two groups, those who are susceptible and those who are not, we did not use the standard control condition: that is, only synchronous stroking was employed, 5 times for each participant. ...
Article
Full-text available
Susceptibility to the rubber hand illusion (RHI) varies. To date, however, there is no consensus explanation of this variability. Previous studies, focused on the role of multisensory integration, have searched for neural correlates of the illusion. But those studies have failed to identify a sufficient set of functionally specific neural correlates. Because some evidence suggests that frontal α power is one means of tracking neural instantiations of self, we hypothesized that the higher the frontal α power during eyes-closed resting state, the more stable the self. As a corollary, we infer that the more stable the self, the less susceptible are participants to a blurring of boundaries—to feeling that the rubber hand belongs to them. Indeed, we found that frontal α amplitude oscillations negatively correlate with susceptibility. Moreover, since lower frequencies often modulate higher frequencies, we explored the possibility that this might be the case for the RHI. Indeed, some evidence suggests that high frontal α power observed in low-RHI participants is modulated by δ frequency oscillations. We conclude that while neural correlates of multisensory integration might be necessary for the RHI, sufficient explanation involves variable intrinsic neural activity that modulates how the brain responds to incompatible sensory stimuli.
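The core analysis described in this abstract, correlating resting-state frontal alpha power with RHI susceptibility, can be sketched on synthetic data. The sampling rate, the simulated per-subject alpha amplitudes, and the susceptibility scores below are invented for illustration; this is not the study's actual pipeline.

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(2)

# Estimate resting-state frontal alpha (8-12 Hz) power per participant
# with Welch's method, then correlate it with susceptibility scores.
fs, dur, n_subj = 250, 20, 30   # Hz, seconds, participants
t = np.arange(fs * dur) / fs

alpha_amp = rng.uniform(0.5, 3.0, n_subj)  # hypothetical per-subject alpha amplitude
# Hypothetical susceptibility, built to be negatively related to alpha, plus noise.
susceptibility = 5.0 - 1.2 * alpha_amp + rng.normal(0.0, 0.5, n_subj)

alpha_power = np.empty(n_subj)
for i in range(n_subj):
    # Synthetic "EEG": a 10 Hz oscillation of subject-specific amplitude in noise.
    eeg = alpha_amp[i] * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
    f, pxx = signal.welch(eeg, fs=fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 12)
    alpha_power[i] = pxx[band].mean()  # mean power spectral density in the alpha band

r, p = stats.pearsonr(alpha_power, susceptibility)
print(f"r = {r:.2f}, p = {p:.4f}")  # expect a negative correlation
```

Because the synthetic susceptibility scores are constructed to decrease with alpha amplitude, the sketch recovers a negative correlation by design; in the actual study this direction was the empirical finding, not an assumption.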
... 3) The sense of self-location refers to the volume of space where one feels located (Kilteni et al., 2012). Usually, self-location and body-space coincide, in that one feels self-located inside a physical body (Lenggenhager et al., 2009); out-of-body experiences can be an exception (Ehrsson et al., 2004). Operators should be aware of the remote environment and their position in it. ...
... This perceptual cue is usually affected by time delay in teleoperation or by system lag in VR environments. There is an extensive literature on the effects of synchronous and asynchronous strokes (Aymerich-Franch et al., 2017; Ehrsson et al., 2004; Folegatti et al., 2009; Hogendoorn et al., 2009; Longo et al., 2008) in RHI setups or when manipulating virtual or robotic limbs. The common finding is that asynchronous stimulation decreases the sense of ownership over the surrogate. ...
... Depending on the application context, the surrogate can represent the entire body or just a part of it. Rubber hands (Botvinick & Cohen, 1998; Ehrsson et al., 2004) and mannequins (Carey et al., 2019) are options only in empirical studies that aim at a better understanding of the components of embodiment and their relations. VR avatars, instead, can be used both for theoretical studies (Debarba et al., 2017; Slater et al., 2010) and for testing the operator's SoE in a commercial teleoperation system (e.g., VR games). ...
... The importance of body-related factors other than the spatial distance between the real hand and the rubber hand was highlighted notably by Tsakiris and Haggard (2005), who systematically examined the influence of the positioning and characteristics of the object used to induce the illusion. The results show that the illusory feeling of ownership of the hand during the illusion occurs only when the stimulated object is a rubber hand of the same laterality, located in the same anatomical position as the participant's hand (Ehrsson et al., 2004; Pavani et al., 2000). No feeling of ownership seems to be induced for a rubber hand placed at a 90-degree angle to the participant's hand, or when a rubber hand of the laterality opposite to the participant's hand is presented. ...
... Ehrsson and colleagues (Ehrsson et al., 2004; Ehrsson, 2020) ...
... In spatial cognition, however, a fundamental distinction is made between the first-person perspective (1PP) and the third-person perspective (3PP) (Vogeley & Fink, 2003), linked respectively to egocentric and allocentric reference frames (Burgess, 2006). An egocentric reference frame is a coordinate system centered on the body and is considered important for functions related to perception and the performance of actions (Fogassi et al., 1992; Graziano & Gross, 1997; Ehrsson et al., 2004; Makin et al., 2008; Petkova & Ehrsson, 2008; Petkova et al., 2011). ...
Thesis
This thesis is part of recent developments in embodiment theory applied to social cognition, which hold that the plasticity of self-representation induced by multisensory interaction can extend to our relationship with others and modify how we feel toward them. Using a procedure inspired by neuroscience to "teleport" human subjects into robots, embodiment of the subject was induced through sensorimotor interaction. We showed that the nature of the manipulation, whether synchronous subject-robot head movements or simultaneous stroking of the two faces, modulates the plasticity of the bodily self-representation, as assessed through changes in illusory feelings of ownership, including face ownership, self-location, and agency. Voluntary movement induces a distinctive pattern through a strong sense of agency, whereas tactile induction produces a more distributed illusory perception. The same modulation is also observed in a face-to-face human-robot interaction setting. This embodiment can increase acceptability of, and social closeness to, the robot, but embodiment alone is not sufficient: the effect depends on the nature of the sensory manipulation. Only the motor manipulations reinforce the subject's sympathy and affinity toward the robot, both of which correlate positively with the sense of agency. These results suggest dissociated mechanisms underlying embodiment in terms of agency. Thus, intentional resonance during simultaneous movements between human and robot could be responsible for heightened feelings of social and emotional closeness toward the robot. Remarkably, the feeling of embodiment and the associated induced social feelings are independent of whether or not the robot has a humanoid appearance.
... At the same time, participants may experience the illusion of perceiving the rubber hand as part of their body. Functional neuroimaging studies using the RHI have revealed a brain network10,11 comprising the body-selective extrastriate body area (EBA), posterior parietal cortex (PPC), and ventral premotor cortex (PMv), which is thought to integrate sensory information in order to recalibrate peripersonal space12-14 and to support action15,16. The RHI paradigm has also been used to investigate how emotion processing, related to threat, interacts with the illusory self-attribution of the fake hand. ...
... Indeed, previous studies investigating the RHI typically applied stimulation for a longer period of time (i.e., 30-35 s), and participants reported the start of the illusion 6 to 10 s after the beginning of stimulation48,49. In that context, Ehrsson et al.10 found that PMv activity was associated with the after-onset period of the RHI (i.e., approx. 11 s after the start of the stroking). ...
Article
Full-text available
Body perception has been extensively investigated, with one particular focus being the integration of vision and touch within a neuronal body representation. Previous studies have implicated a distributed network comprising the extrastriate body area (EBA), posterior parietal cortex (PPC) and ventral premotor cortex (PMv) during illusory self-attribution of a rubber hand. Here, we set up an fMRI paradigm in virtual reality (VR) to study whether and how the self-attribution of (artificial) body parts is altered if these body parts are somehow threatened. Participants (N = 30) saw a spider (aversive stimulus) or a toy-car (neutral stimulus) moving along a 3D-rendered virtual forearm positioned like their real forearm, while tactile stimulation was applied on the real arm in the same (congruent) or opposite (incongruent) direction. We found that the PPC was more activated during congruent stimulation; higher visual areas and the anterior insula (aIns) showed increased activation during aversive stimulus presentation; and the amygdala was more strongly activated for aversive stimuli when there was stronger multisensory integration of body-related information (interaction of aversiveness and congruency). Together, these findings suggest an enhanced processing of aversive stimuli within the amygdala when they represent a bodily threat.
... In the paradigm, watching a fake rubber hand being stroked by a paintbrush in synchrony and in the same direction with one's own concealed hand creates the feeling that the rubber hand is one's own, although the participant is indeed aware that the rubber hand is not theirs. This illusion occurred about 11 s after stroking the hand (Ehrsson et al., 2004). In addition, the perceived position of the real hand drifts toward the rubber hand (proprioceptive drift; Botvinick & Cohen, 1998;Costantini & Haggard, 2007;Tsakiris et al., 2010;Tsakiris & Haggard, 2005). ...
... In addition, the perceived position of the real hand drifts toward the rubber hand (proprioceptive drift; Botvinick & Cohen, 1998;Costantini & Haggard, 2007;Tsakiris et al., 2010;Tsakiris & Haggard, 2005). Proprioceptive drift does not occur when the rubber hand is stroked asynchronously (Armel & Ramachandran, 2003;Ehrsson et al., 2004;Tsakiris & Haggard, 2005) or in a different direction to the real hand (Costantini & Haggard, 2007). When a nonbody part object is used (e.g., a wooden stick or block), or when the rubber hand posture is incongruent with the posture of the real hand, the RHI does not occur (Guterstam et al., 2013;Tsakiris & Haggard, 2005). ...
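The proprioceptive drift described in these excerpts is usually quantified as the shift of the participant's judged hand position toward the rubber hand between a pre- and a post-stimulation pointing judgment. A minimal sketch of that computation, with hypothetical function and variable names (not taken from any of the cited studies):

```python
def proprioceptive_drift(judged_before_cm, judged_after_cm, rubber_hand_cm):
    """Drift of the felt hand position toward the rubber hand, in cm.

    Positive values mean the position judgment moved toward the rubber hand;
    negative values mean it moved away.
    """
    shift = judged_after_cm - judged_before_cm
    # Sign the shift relative to where the rubber hand lies with respect
    # to the initial judgment, so "toward the rubber hand" is positive.
    direction = 1 if rubber_hand_cm > judged_before_cm else -1
    return shift * direction

# Example: hand judged at 0 cm before stroking and at 2.5 cm after,
# with the rubber hand placed at 15 cm.
drift = proprioceptive_drift(0.0, 2.5, 15.0)
print(drift)  # 2.5 -> judgment drifted 2.5 cm toward the rubber hand
```

In practice a study would average such drift values across trials and compare synchronous against asynchronous stroking conditions.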
Article
Full-text available
Badminton players have a plastic modification of their arm representation in the brain due to the prolonged use of their racket. However, it is not known whether their arm representation can be altered through short-term visuotactile integration. The neural representation of the body is easily altered when multiple sensory signals are integrated in the brain. One of the most popular experimental paradigms for investigating this phenomenon is the “rubber hand illusion.” This study was designed to investigate the effect of prolonged use of a racket on the modulation of arm representation during the rubber hand illusion in badminton players. When badminton players hold the racket, their badminton experience in years is negatively correlated with the magnitude of the rubber hand illusion. This finding suggests that tool embodiment obtained by the prolonged use of the badminton racket is less likely to be disturbed when holding the racket.
... Beyond these core components, ongoing research further highlights the influence of top-down factors, thus departing from a wholly bottom-up account (Dempsey-Jones & Kritikos, 2014;Kilteni et al., 2015). For example, inconsistencies between the position of the rubber hand and internal representations of the real limb's actual posture impair the RHI, thus showing that prior information shapes the emergence of this phenomenon (Ehrsson et al., 2004). Likewise, handedness, anatomy, texture, incorporeability, affect, and awareness of internal body signals can also modulate the illusion to some extent (Dempsey-Jones & Kritikos, 2017Kilteni et al., 2015;Tsakiris, 2010Tsakiris, , 2017. ...
... We speculate that these modulations reflect the putative central role of visual processing in bodily self-consciousness (Deroy et al., 2016;Faivre et al., 2015). Prevailing views argue that the combination of visual and tactile inputs overwrites prior proprioceptive knowledge, thereby altering body representations during the RHI (Botvinick & Cohen, 1998;Ehrsson et al., 2004;Hagura et al., 2007;Pavani et al., 2000). The current work expands this viewpoint by showing how focusing attention on visual inputs heightens feelings of embodiment in the context of the RHI compared to when individuals instead focus on somatosensations. ...
Article
Full-text available
The Rubber Hand Illusion (RHI) creates distortions of body ownership through multimodal integration of somatosensory and visual inputs. This illusion largely rests on bottom-up (automatic multisensory and perceptual integration) mechanisms. However, the relative contribution from top-down factors, such as controlled processes involving attentional regulation, remains unclear. Following previous work that highlights the putative influence of higher-order cognition in the RHI, we aimed to further examine how modulations of working memory load and task instructions—two conditions engaging top-down cognitive processes—influence the experience of the RHI, as indexed by a number of psychometric dimensions. Relying on exploratory factor analysis for assessing this phenomenology within the RHI, our results confirm the influence of higher-order, top-down mental processes. Whereas task instruction strongly modulated embodiment of the rubber hand, cognitive load altered the affective dimension of the RHI. Our findings corroborate that top-down processes shape the phenomenology of the RHI and herald new ways to improve experimental control over the RHI.
... Top-down factors can likewise influence embodiment by limiting the scope of integration into one's own body. Thus, factors like shape (e.g., Tsakiris et al., 2010) and orientation (e.g., Ehrsson et al., 2004) of the rubber hand are important constraints. This indicates that the embodied entity should be plausibly connected to the body, although some degrees of flexibility can be observed (e.g., Kilteni et al., 2012b). ...
... Plausible implied bodies (c.f., Tsakiris, 2010) are integrated into the body representation (Implied body 1 and 2). In this step, the implied body is, for example, assessed with respect to its shape (e.g., Tsakiris et al., 2010), orientation (e.g., Ehrsson et al., 2004), and distance (e.g., Kalckert and Ehrsson, 2014b) from the existing body representation. Embodiment of an entity arises from integrating implied bodies into the body representation, whereas presence relates to the spatial reference point of this body representation. ...
Article
Full-text available
When interacting with objects in the environment, it feels natural to have a body which moves in accordance with our intentions. Virtual reality (VR) provides a tool to present users with an alternative virtual body and environment. In VR, humans embody the presented virtual body and feel present in the virtual environment. Thus, embodiment and presence frequently co-occur and share some commonalities. Nevertheless, both processes have hardly been considered together. Here, we review the current literature on embodiment and presence and present a new conceptual framework, the Implied Body Framework (IBF), which unifies both processes into one single construct. The IBF can be used to generate new hypotheses to further improve the theoretical conceptualisation of embodiment and presence and thus facilitate its transfer into application.
... and which looks at the world with its own specific perspective (the first-person perspective; Brozzoli et al., 2012a; Ehrsson et al., 2004, 2005; Makin et al., 2007). Furthermore, as already described, the neurons of these areas are sensitive to the spatial-temporal coherence of within-PPS stimulations, and it is this capability of detecting the synchrony of multimodal stimuli that determines the onset of the RHI (Costantini & Haggard, 2007). ...
... Participants reported a greater sense of ownership towards the virtual arm in case of congruent visual-tactile stimulation, correlated with enhanced activity of the posterior parietal cortex (area 7a, in SPL). This result confirms what has been observed in the literature on the rubber hand illusion, in which the sense of illusory ownership seems to be linked to the activity of a brain network that includes two crucial nodes of the neural representation of the PPS (the posterior parietal cortex and the premotor cortex) plus the extrastriate body area (EBA), located in the middle temporal gyrus (Ehrsson et al., 2004, 2005; Limanowski & Blankenburg, 2015). The most interesting aspect is that Fourcade and colleagues report enhanced activity of the amygdala in response to threatening stimulation, as would be expected from the core structure of the fear network, but this response was modulated by the strength of the illusion of ownership, with increased activity in case of congruent threatening visuo-tactile stimulation. ...
Thesis
Full-text available
Forty years have passed since the coining of the term "peripersonal space" (PPS), that region of space in which our daily life takes place, in which we can interact with the objects and people around us. The first studies of the electrophysiological literature on this spatial representation observed, in specific regions of the macaque's brain, the existence of multisensory neurons capable of encoding tactile, visual and/or auditory stimuli according to their distance from specific parts of the body. These bi- or trimodal neurons show tactile receptive fields centered on a specific part of the body, such as the face or hand, and visual and/or auditory receptive fields overlapping spatially with the former. In this way, the same neurons are able to respond to tactile, visual and auditory stimulations delivered on or close to a specific body part. Furthermore, these multisensory receptive fields are "anchored" to each other: the movement of the monkey's hand involves a coherent displacement not only of the tactile receptive fields, but also of the visual ones. This body-part centered reference frame of the coding of multisensory stimuli within PPS allows to keep the information relating to the position of the different parts of the body and surrounding objects always updated, with the aim of planning and implementing effective actions. Neurophysiological and behavioral studies on patients suffering from extinction and neglect following brain lesions of the right hemisphere have allowed to highlight, even in humans, the existence and modularity of the PPS. Subsequent neuroimaging studies have brought support to this evidence, highlighting a network of fronto-parietal and subcortical regions capable of coding multi-modal stimulations according to their distance from the body.
The functions of this spatial representation are manifold: mediating the relationship between the perception of external stimuli and the execution of goal-directed actions, monitoring the space around the body in order to identify potential threats and implement defensive reactions, organizing and managing the space between us and others in different types of social interaction, and allowing us to identify ourselves with our body, giving it a localization in space. However, despite the great scientific interest that this region of space has elicited over the past forty years, a direct comparison of its neural underpinnings in non-human primates and humans is still missing. For this reason, in the first chapter of this doctoral dissertation we will report the results of an fMRI study, conducted on human and macaque participants, which investigated the neural response patterns to stimulations close to or far from different body parts, minimizing the differences among the experimental protocols used in the two species. For the first time, PPS is tested in two different species with the same experimental protocol, highlighting similarities and differences between the human and simian PPS circuits, as well as between the response patterns associated with the stimulation of different bodily districts. Starting from the second chapter we will instead focus our interest only on human participants, to try to shed light on a definitional problem that has conflated the concept of PPS representation with that of a second spatial representation: the arm reaching space (ARS). The latter, considered as the space around the body that we can reach by extending our arm, has over time often been used as a synonym for the PPS representation, leading researchers to define PPS as ARS or to test the two spatial representations with the same experimental protocols.
However, the different neural bases and the different characteristics of the encoding of stimuli within these two regions of space suggest their distinction. In chapter II, to this purpose, we will present a series of five behavioral experiments that investigated the differences and similarities between PPS and ARS .. [etc]
... Recent research has linked bodily self-consciousness to the processing and integration of multisensory bodily signals [40][41][42]. Indeed, activation of multisensory brain areas and multimodal neurons has been found in participants reporting abnormal perception of their body [43][44][45]. Further, experimental support for this claim comes from the so-called rubber hand illusion (RHI) [46]. These classical studies demonstrated that when healthy participants watch an artificial hand being stroked in synchrony with stroking on their own hidden hand (placed behind a barrier), they report feeling the touch as coming from the artificial hand and experiencing the artificial hand as their own [43,46,47]. ...
... Further, experimental support for this claim comes from the so-called rubber hand illusion (RHI) [46]. These classical studies demonstrated that when healthy participants watch an artificial hand being stroked in synchrony with stroking on their own hidden hand (placed behind a barrier), they report feeling the touch as coming from the artificial hand and experiencing the artificial hand as their own [43,46,47]. These changes in tactile perception and hand ownership are often accompanied by a drift in the perceived position of one's own hand toward the artificial hand (i.e. ...
Article
Full-text available
Purpose of review The goal of the review is to highlight the growing importance of multisensory integration processes connected to bionic limbs and somatosensory feedback restoration. Recent findings Restoring quasi-realistic sensations by means of neurostimulation has been shown to provide functional and motor benefits in limb amputees. In the recent past, cognitive processes linked to the artificial sense of touch have appeared to play a crucial role in full prosthesis integration and acceptance. Summary Artificial sensory feedback implemented in bionic limbs enhances the cognitive integration of the prosthetic device in amputees. The multisensory experience can be measured and must be considered in the design of novel somatosensory neural prostheses whose goal is to provide a realistic sensory experience to the prosthetic user. The correct integration of these sensory signals will guarantee higher-level cognitive benefits, such as better prosthesis embodiment and a reduction of perceived limb distortions.
... Thus, it is difficult to achieve embodiment when the stimulus presented through artificial body parts (mainly visual feedback) is not consistent with the sensory feedback (e.g., tactile, haptic, and proprioceptive) perceived through the natural body. It is also known that the sense of body ownership is restricted by spatial and anatomical consistency [17,34,97]. ...
... Our results indicate that this finer spatial resolution of the somatosensory mapping (which could be short-lived) can benefit visually-based extrinsic body representations. The interdependence between somatosensory-based and visually-based body representations is also revealed in the so-called rubber hand illusion [21]. This illusion arises from the simultaneous brushing of the hand of the subject, which is hidden from view, and of a facsimile of a human hand viewed in front of the subject. ...
Preprint
Full-text available
Previous studies have shown that the sensory modality used to identify the position of proprioceptive targets hidden from sight, but frequently viewed, influences the type of the body representation employed for reaching them with the finger. The question then arises as to whether this observation also applies to proprioceptive targets which are hidden from sight, and rarely, if ever, viewed. We used an established technique for pinpointing the type of body representation used for the spatial encoding of targets which consisted of assessing the effect of peripheral gaze fixation on the pointing accuracy. More precisely, an exteroceptive, visually dependent, body representation is thought to be used if gaze deviation induces a deviation of the pointing movement. Three light-emitting diodes (LEDs) were positioned at the participants' eye level at -25 deg, 0 deg and +25 deg with respect to the cyclopean eye. Without moving the head, the participant fixated the lit LED before the experimenter indicated one of the three target head positions: topmost point of the head (vertex) and two other points located at the front and back of the head. These targets were either verbal-cued or tactile-cued. The goal of the subjects (n=27) was to reach the target with their index finger. We analysed the accuracy of the movements directed to the topmost point of the head, which is a well-defined, yet out of view anatomical point. Based on the possibility of the brain to create visual representations of the body areas that remain out of view, we hypothesized that the position of the vertex is encoded using an exteroceptive body representation, both when verbally or tactile-cued. Results revealed that the pointing errors were biased in the opposite direction of gaze fixation for both verbal-cued and tactile-cued targets, suggesting the use of a vision-dependent exteroceptive body representation. 
The enhancement of the visual body representations by sensorimotor processes was suggested by the greater pointing accuracy when the vertex was identified by tactile stimulation compared to verbal instruction. Moreover, we found in a control condition that participants were more accurate in indicating the position of their own vertex than the vertex of other people. This result supports the idea that sensorimotor experiences increase the spatial resolution of the exteroceptive body representation. Together, our results suggest that the positions of rarely viewed body parts are spatially encoded by an exteroceptive body representation and that non-visual sensorimotor processes are involved in the construction of this representation.
... In the stimulus phase, the brain might raise its integrity for working memory (Fz) and motor function (Cz) instead of separating out the inconsistent visual information (Oz). This relation makes sense if we consider that motor perception, together with the subject's short-term memory (including multisensory integration in the ventral premotor cortex; Ehrsson et al., 2004; Gentile et al., 2015), attempts to counterbalance the mismatch in the visual system. The body integrity (Res and EDA) is also observed in the stimulus phase. ...
Preprint
Full-text available
Abstract Human body awareness is malleable and adaptive to changing contexts. The illusory sense of body ownership has been studied since the publication of the rubber hand illusion, where the ambiguous feeling of body ownership, expressed as “the dummy hand is my hand even though that is not true”, was first defined. Phenomenologically, the ambiguous body ownership is attributed to a conflict between feeling and judgement; in other words, it characterises a discrepancy between first-person (i.e. bottom-up) and third-person (i.e. top-down) processes. Although Bayesian inference can explain this malleability of body image sufficiently, the theory does not provide a good illustration of why we have different experiences of the same stimuli – the difficulty lies in the uncertainty regarding the concept of judgement in that theory. This study attempts to explain subjective experience during the rubber hand illusion using integrated information theory (IIT). The integrated information Φ in IIT measures the difference between the entire system and its subsystems. This concept agrees with the phenomenological interpretation – that is, that there is conflict between judgement and feeling. By analysing the seven nodes of a small body–brain system, we demonstrate that the integrity of the entire system during the illusion decreases with increasing integrity of its subsystems. These general tendencies agree well with many brain-imaging analyses and subjective reports; furthermore, we found that subjective ratings were associated with the Φs. Our results suggest that IIT can explain the general tendency of ownership illusions and the individual differences in subjective experience during the illusions.
... Neuroimaging studies suggest that embodiment of an artificial body part is linked to activation of different frontal and posterior areas (Ehrsson et al., 2004). The premotor area (PM) seems to have a prominent role, as suggested by the positive correlation between its activation and the strength of the illusory FO, and by the changes in connectivity dynamics with parietal areas (Kanayama et al., 2017). ...
Article
When we look at our body parts, we are immediately aware that they belong to us and we rarely doubt about the integrity, continuity and sense of ownership of our body. Despite this certainty, immersive virtual reality (IVR) may lead to a strong feeling of embodiment over an artificial body part seen from a first-person perspective. Although such feeling of ownership (FO) has been described in different situations, it is not yet understood how this phenomenon is generated at neural level. To track the real-time brain dynamics associated with FO, we delivered transcranial magnetic stimuli over the hand region in the primary motor cortex and simultaneously recorded electroencephalography in 19 healthy volunteers (11M/8F) watching IVR renderings of anatomically plausible (full-limb) vs implausible (hand disconnected from the forearm) virtual limbs. Our data show that embodying a virtual hand is temporally associated with a rapid drop of cortical activity of the onlookers' hand region in the primary motor cortex contralateral to the observed hand. Spatiotemporal analysis shows that embodying the avatar's hand is also associated with fast changes of activity within an interconnected fronto-parietal circuit ipsilateral to the brain stimulation. Specifically, an immediate reduction of connectivity with the premotor area is paralleled by an enhancement in the connectivity with the posterior parietal cortex which is related to the strength of ownership illusion ratings and thus likely reflects conscious feelings of embodiment. Our results suggest that changes of bodily representations are underpinned by a dynamic cross-talk within a highly-plastic, fronto-parietal network. SIGNIFICANCE STATEMENT: Observing an avatar's body part from a first-person perspective induces an illusory embodiment over it. What remains unknown are the cortical dynamics underpinning the embodiment of artificial agents.
To shed light on the physiological mechanisms of embodiment we used a novel approach that combines non-invasive stimulation of the cortical motor-hand area and whole-scalp electroencephalographic recordings in people observing an embodied artificial limb. We found that just before the illusion starts, there is a decrease of activity of the motor-hand area accompanied by an increase of connectivity with the parietal region ipsilateral to the stimulation that reflects the ratings of the embodiment illusion. Our results suggest that changes of bodily representations are underpinned by a dynamic cross-talk within a fronto-parietal circuit.
... Alternatively, we suggest that it can represent the physiological counterpart of an embodiment phenomenon, related to the sense of body-ownership (Ehrsson et al., 2004). In recent years, the growing interest in the concept of body-ownership (i.e., the belief that a specific body part belongs to one's own body) has paid specific attention to the relation between the perspective through which a body part is observed and the possibility for subjects to experience it as part of their own body (i.e. ...
Thesis
Empathy allows us to understand and react to other people's feelings. Regarding empathy for pain, a witness looking at a painful situation may react with other-oriented, prosocial-altruistic behaviors or self-oriented withdrawal responses. The main aim of this thesis was to study approach/avoidance and freezing behavioral manifestations that co-occur with both the observation of others' pain and the anticipation of pain. In two perspective-taking tasks, we investigated the influence of the type of relationship between the witness and the target in pain. Results showed higher pain ratings, lower reaction times (experiment 1) and greater withdrawal-avoidance postural responses (experiment 2) when participants adopted the perspective of their most loved person. In experiment 3, we analyzed the freezing behavior in the observer's corticospinal system while subjects observed painful stimuli in first- and third-person perspectives. Results showed that the pain-specific freezing effect only pertained to the first-person perspective condition. An empathy-for-pain interpretation suggests empathy might represent the anticipation of painful stimulation in oneself. In experiment 4, we found that the freezing effect present during a painful electrical stimulation was also present in the anticipation of pain. In conclusion, our studies suggest that cognitive perspective-taking mechanisms mainly modulate the empathic response, and the most loved person's perspective seems to be prevalent. In addition, more basic pain-specific corticospinal modulations are mainly present in the first-person perspective and seem not to be related to the empathy components.
... Ehrsson et al. [22] suggested that visual feedback using a mirror is a cognitive technique that can help increase the motor function of stroke patients. In particular, mirror-based interventions, which provide visual feedback as if the movement of the non-affected limb reflected in the mirror were the movement of the affected limb, thereby promoting function [13], are effective in stroke rehabilitation. ...
... To this purpose, research on the body transfer illusion has also become a thought-provoking subject. As the entity closest to human perception, our bodies should be difficult to misperceive, yet illusions such as the rubber hand illusion [13][14] persist. The rubber hand illusion is based on a series of manipulations within a well-designed body perception process, through which participants ultimately misidentify their own bodies. ...
Conference Paper
Full-text available
With the advancement of technology and the rapid internet development, people can now easily obtain information through diverse media. However, are we aware of the biased or even manipulated information? The presented techno-art work is built to provide a situated experience for the audience to reflect on technological mediation. In a dynamic process, illusions are created, experienced, explained (but failed), and, finally, transcended.
... It was shown that this manipulation not only influences the perceived source of a touch (i.e., as coming from the rubber hand), but that the neural activity in the premotor cortex also reflects the subject's feeling of ownership of the hand (Ehrsson, 2004). ...
Preprint
Full-text available
In this paper we present a computational modeling account of an active self in artificial agents. In particular we focus on how an agent can be equipped with a sense of control and how it arises in autonomous situated action and, in turn, influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We present such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how this can support action control in an unpredictable environment. We present an implementation of this model as well as first evaluations in a simulated task scenario, in which an autonomous agent has to cope with un-/predictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low/high-level action control varies and they demonstrate how the sense of control can facilitate this.
... * Survived the Benjamini-Hochberg procedure (FDR was set at 10%). of ownership of a fake/virtual body or body parts 1,13,68 , thus suggesting that the activity in this area reflects changes in the subjective feeling of body ownership. Interestingly, however, we failed to identify a relationship between the thickness of the left precentral region and the strength of the illusion, as measured by the questionnaire. ...
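The Benjamini-Hochberg procedure mentioned in this excerpt controls the false discovery rate by comparing the k-th smallest p-value against the threshold (k/m)·q. A minimal sketch of the standard procedure (illustrative only; not the cited study's code):

```python
def benjamini_hochberg(p_values, fdr=0.10):
    """Return a boolean list marking which p-values survive FDR control at level `fdr`."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k/m) * q.
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * fdr:
            max_k = rank
    # All hypotheses with rank <= max_k are declared significant.
    survives = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            survives[idx] = True
    return survives

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30], fdr=0.10))
# [True, True, True, False]
```

Note that a p-value larger than its own threshold can still survive if a larger rank passes, which is why the procedure keeps everything up to the largest passing rank.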
Article
Full-text available
The widely used rubber hand illusion (RHI) paradigm provides insight into how the brain manages conflicting multisensory information regarding bodily self-consciousness. Previous functional neuroimaging studies have revealed that the feeling of body ownership is linked to activity in the premotor cortex, the intraparietal areas, the occipitotemporal cortex, and the insula. The current study investigated whether the individual differences in the sensation of body ownership over a rubber hand, as measured by subjective report and the proprioceptive drift, are associated with structural brain differences in terms of cortical thickness in 67 healthy young adults. We found that individual differences measured by the subjective report of body ownership are associated with the cortical thickness in the somatosensory regions, the temporo-parietal junction, the intraparietal areas, and the occipitotemporal cortex, while the proprioceptive drift is linked to the premotor area and the anterior cingulate cortex. These results are in line with functional neuroimaging studies indicating that these areas are indeed involved in processes such as cognitive-affective perspective taking, visual processing of the body, and the experience of body ownership and bodily awareness. Consequently, these individual differences in the sensation of body ownership are pronounced in both functional and structural differences.
... Sense of ownership (SoO) is the feeling that parts of the body, or the entire body, belong to oneself (Gallagher, 2000). This subjective experience is generated from multisensory integration of visual, proprioceptive, and somatosensory information through comparisons between the visually perceived body and the anatomical model of the bodily self (Gallagher, 2000;Jeannerod, 2003;Ehrsson et al., 2004;Tsakiris, 2010). When one's own hand is observed in the appropriate position, as part of one's body, and can be moved according to one's own will, it is clearly recognized as part of one's own body. ...
Article
Full-text available
The sense of body ownership, the feeling that one’s own body belongs to oneself, is generated from the integration of visual, tactile, and proprioceptive information. However, long-term non-use of parts of the body due to physical dysfunction caused by trauma or illness may disturb multisensory integration, resulting in a decreased sense of body ownership. The rubber hand illusion (RHI) is an experimental method of manipulating the sense of ownership (SoO). In this illusion, subjects feel as if the rubber hand in front of them were their own hand. The RHI also elicits a disownership phenomenon: not only does the rubber hand feel like one’s own hand, but one’s own hand no longer feels like one’s own. The decrease of ownership of one’s own body induced by the bodily illusion is accompanied by neurophysiological changes, such as attenuation of somatosensory evoked potentials and decreases in skin temperature. If the loss of the SoO is associated with decreased neurophysiological function, the dysfunction of patients complaining of the loss of ownership can be exacerbated; appropriate rehabilitation prescriptions are urgently required. The present study attempted to induce a sense of disownership of subjects’ own hands using the RHI and investigated whether the tactile sensitivity threshold was altered by disownership. Via questionnaire, subjects reported a decrease of ownership after the RHI manipulation; at the same time, tactile sensitivity thresholds were shown to increase in tactile evaluation using the Semmes-Weinstein monofilament test. The changes in tactile detection rate before and after the RHI were negatively correlated with the changes in disownership score. These results show that subjects’ sense of disownership, that their own hands did not belong to them, led to decreases in tactile sensitivity. The study findings also suggest that manipulating illusory ownership can be a tool for estimating the degree of exacerbation of sensory impairment in patients.
Consideration of new interventions that optimize the sense of body ownership may contribute to new rehabilitation strategies for post-stroke sensory impairment.
... In recent years, several studies have proposed that the components of bodily self-consciousness are a key mechanism of peripersonal space (PPS) representation (Blanke, 2012; Makin et al., 2008; Serino et al., 2013; Tsakiris, 2010). Indeed, the establishment of a sense of ownership through hand illusion experiments, in which a rubber hand is placed in a posture compatible with the true position of the arm (the "rubber hand illusion"), correlates with activity in the neural regions underlying PPS, in particular in premotor and posterior parietal cortex (Ehrsson et al., 2004, 2005; Makin et al., 2007). Further evidence showed that after a few minutes during which a participant perceived a touch on their face while seeing an unfamiliar face being touched in a spatially and temporally congruent manner, creating an illusion involving the face (the "enfacement illusion"), the activity of IPS neurons increased (Apps et al., 2015). ...
Thesis
Even though we perceive the space around us as a Cartesian continuum, the region of space near the body where physical interactions with the environment take place is a special region, called peripersonal space (PPS). PPS was first defined on the basis of the properties of neurons recorded in monkeys in specific premotor and parietal brain regions. More recently, a putative homologous network has been identified in humans using fMRI. The representation of this space does not refer to a well-delimited region with clear boundaries but is instead flexible, allowing us to adapt our behavior according to context. In particular, the world of humans and monkeys is above all a social world. In this social world, a comfort zone is necessary to regulate the distance between oneself and others and thereby avoid discomfort, or even anxiety. However, little is still known about this social dimension of PPS compared to the object-related one. In this context, my thesis work first aimed to bridge the gap between the properties recorded in individual monkey neurons and the brain activity of the human premotor-parietal network identified with neuroimaging. Second, it aimed to shed new light on the social dimension of PPS, a topic that has been largely neglected so far even though it is of the utmost importance for all animals. To address these questions, I developed protocols using a virtual reality (VR) environment allowing very precise manipulation and control of visual information at different distances from the body. For my first objective, I used similar experimental procedures in humans and monkeys in order to compare brain activity with fMRI.
Across two tasks in which real or virtual objects were presented at different distances (near and far) from the body, I identified a homologous premotor-parietal network underlying the representation of PPS in both species. For my second objective, I used a multi-scale approach. Specifically, my goal was to understand how social information (emotional facial expressions) in our PPS affects our perceptual abilities, our physiological state, and our brain activity. At the behavioral level, my results showed that visual discrimination abilities were enhanced when emotional faces were presented in PPS compared to far space, even when retinal size was matched between near and far images. This improvement in perceptual abilities was accompanied by an increase in heart rate when emotional faces were close to the body. Finally, at the neural level, I identified an occipito-premotor-parietal network with increased activity in the presence of near compared to far emotional faces. My results also show that a common network encodes social and non-social stimuli in PPS in a similar way. In parallel with this work in healthy volunteers, I also established a direct link between unilateral medio-temporal lesions and a deficit in the appropriate regulation of social distances. In summary, my results demonstrate that social presence in PPS facilitates our behavioral performance, increases our level of vigilance, and recruits a core premotor-parietal neural network regardless of the type of information (social or non-social).
Thus, a common neural network would enable a rapid response, being recruited primarily in any situation occurring in our PPS, with additional brain regions coming into play to fine-tune our behavior according to context.
... First, recent studies of body representation in the brain have focused on the frontoparietal network (Naito et al., 2007; Takeuchi et al., 2016), and we hypothesize that body-specific attention, which reflects body representation in the brain, is related to this network. This network includes brain regions underlying bodily consciousness, including the sense of ownership and the sense of agency (Ehrsson et al., 2004; Naito et al., 2007; Gentile et al., 2013; Ohata et al., 2020). This frontoparietal network represents the self-body in the brain and contributes to the realization of efficient motor control and body cognition in humans. ...
Article
Full-text available
To execute the intended movement, the brain directs attention, called body-specific attention, to the body to obtain information useful for movement. Body-specific attention to the hands has been examined but not to the feet. We aimed to confirm the existence of body-specific attention to the hands and feet, and to examine its relation to motor and sensory functions from a behavioral perspective. The study included two groups of 27 right-handed and right-footed healthy adults, respectively. Visual detection tasks were used to measure body-specific attention. We measured reaction times to visual stimuli on or off the self-body and calculated an index of body-specific attention by subtracting the reaction time on the self-body from that off it. Participants were classified into low and high attention groups based on each left and right body-specific attention index. For motor functions, Experiment 1 comprised handgrip strength and ball-rotation tasks for the hands, and Experiment 2 comprised toe grip strength involved in postural control for the feet. For sensory functions, the tactile thresholds of the hands and feet were measured. The results showed that, for both hands, the reaction time to visual stimuli on the hand was significantly shorter than that off the hand. In the feet, this facilitation effect was observed in the right foot but not the left, which instead showed a correlation between body-specific attention and normalized toe gripping force, suggesting that body-specific attention affected postural control. In the hand, the number of rotations of the ball was higher in the high than in the low attention group, regardless of the difficulty of the elaborate movement or whether the left or right hand was used. However, this relation was not observed in the handgrip task. Thus, body-specific attention to the hand is an important component of elaborate movements. The tactile threshold was higher in the high than in the low attention group, regardless of the side, in both hand and foot.
The results suggested that more body-specific attention is directed to the limbs with lower tactile abilities, supporting the sensory information reaching the brain. Therefore, we suggested that body-specific attention regulates the sensory information to help motor control.
... The IPS and PMV have been reported to encode the hand position by integrating visual and proprioceptive signals (66, 68). Moreover, activity in the PMV reflects individual differences in experienced artificial hand ownership (67, 69). Unlike the IPS and PMV, the SPL and EBA have been found to encode changes in proprioceptive hand position in the dark, although these regions also responded to the position of a visible computer-generated hand (65, 67). ...
Article
Full-text available
Purposeful motor actions depend on the brain’s representation of the body, called the body schema, and disorders of the body schema have been reported to show motor deficits. The body schema has been assumed for almost a century to be a common body representation supporting all types of motor actions, and previous studies have considered only a single motor action. Although we often execute multiple motor actions, how the body schema operates during such actions is unknown. To address this issue, I developed a technique to measure the body schema during multiple motor actions. Participants made simultaneous eye and reach movements to the same location of 10 landmarks on their hand. By analyzing the internal configuration of the locations of these points for each of the eye and reach movements, I produced maps of the mental representation of hand shape. Despite these two movements being simultaneously directed to the same bodily location, the resulting hand map (i.e., a part of the body schema) was much more distorted for reach movements than for eye movements. Furthermore, the weighting of visual and proprioceptive bodily cues to build up this part of the body schema differed for each effector. These results demonstrate that the body schema is organized as multiple effector-specific body representations. I propose that the choice of effector toward one’s body can determine which body representation in the brain is observed and that this visualization approach may offer a new way to understand patients’ body schema.
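Hand-map distortions of the kind described above are commonly summarized by comparing the judged landmark configuration against the actual one, for instance as a width/length ratio. A hedged sketch of such an index (function names and sample coordinates are hypothetical illustrations, not the study's actual analysis):

```python
def shape_index(points):
    """Width/length ratio of 2D landmark points (x, y) in hand coordinates,
    with x spanning knuckle-to-knuckle width and y wrist-to-fingertip length."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

def distortion(judged, actual):
    """Ratio > 1 means the judged map is relatively wider and shorter than
    the real hand, the classic hand-map distortion pattern."""
    return shape_index(judged) / shape_index(actual)

actual = [(0, 0), (8, 0), (0, 18), (8, 18)]    # real landmark layout (cm)
judged = [(0, 0), (10, 0), (0, 14), (10, 14)]  # judged landmark layout (cm)
print(round(distortion(judged, actual), 2))    # → 1.61
```

Computing this index separately per effector (eye vs. reach judgments) would expose the effector-specific differences the abstract reports.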
... What the body model consists of, however, remains unclear. Multiple studies on this issue [23][24][25][26][27][28][29][30] have discovered diverse results that are probably best explained by the functional body model hypothesis 1 , which hypothesizes that the entity a brain can embody is determined by whether it is recognized to sufficiently afford actions that the brain has learnt to associate with the limb it substitutes. This would, however, suggest that perception of ownership will be difficult to produce for a new, independent additional limb, since by definition it will not have been associated ...
Article
Full-text available
Can our brain perceive a sense of ownership towards an independent supernumerary limb; one that can be moved independently of any other limb and provides its own independent movement feedback? Following the rubber-hand illusion experiment, a plethora of studies have shown that the human representation of “self” is very plastic. But previous studies have almost exclusively investigated ownership towards “substitute” artificial limbs, which are controlled by the movements of a real limb and/or for which non-visual sensory feedback is provided on an existing limb. Here, to investigate whether the human brain can own an independent artificial limb, we first developed a novel independent robotic “sixth finger.” We allowed participants to train using the finger and examined whether it induced changes in the body representation using behavioral as well as cognitive measures. Our results suggest that unlike a substitute artificial limb (like in the rubber hand experiment), it is more difficult for humans to perceive a sense of ownership towards an independent limb. However, ownership does seem possible, as we observed clear tendencies of changes in the body representation that correlated with the cognitive reports of the sense of ownership. Our results provide the first evidence to show that an independent supernumerary limb can be embodied by humans.
... Recent evidence in neuroscience and psychology suggests that our body schema (i.e., the perceived map, shape, and posture of our body) is not fixed, but is instead an adaptable, plastic representation constructed by our multisensory experience (Ehrsson et al., 2004; Lenggenhager et al., 2007; Petkova, 2011; Gentile et al., 2013). Previous work has demonstrated that tools can be incorporated into the body schema (Farnè et al., 2005a; Miller, 2018), and that the length of our limbs, our body size, peripersonal space, and body shape can be drastically altered given the right combination of tactile, motor, proprioceptive, and visual cues (Calzolari et al., 2017; Guterstam et al., 2018; Miller, 2018). ...
Article
Full-text available
In this study, we recreate the Pinocchio Illusion—a bodily illusion whereby the perceived length of one’s nose is extended—in Virtual Reality. Participants (n = 38) self-administered tapping on the tip of the nose of a virtual avatar seen from the first-person perspective (using a hand-held controller) while the nose of the avatar slowly grew with each tap. The stimulating virtual arm and the virtual nose were linked such that while the nose grew the arm extended, and then also grew up to 50%. This produced an extension of the perceived reach of the stimulating arm, and an outward drift in the participants’ real arm. A positive correlation between the extent of the outward drift of the participants’ arm and the perceived reachability of distal objects was observed. These results were found both with synchronous tactile stimulation on the participants’ real nose, and without, but not for control conditions in which the visuomotor synchrony or body schema were violated. These findings open new avenues for hand grasp interactions with virtual objects out of arm’s reach in immersive setups and are discussed in the context of theories of body ownership, body schema, and touch perception.
... External stimuli can even alter the perception of our own body. The rubber hand illusion [Botvinick and Cohen, 1998] [Ehrsson et al., 2004] induces an illusion of body transfer toward an artificial object. A subject sitting at a table places their own hand, hidden from view, next to a realistic rubber hand. ...
Thesis
Full-text available
An ever-increasing number of human-machine interfaces have embraced touchscreens as their central component, such as in smartphones, laptops, terminals, etc. Their success has been particularly noticeable in the automotive industry, where physical buttons have been replaced by touchscreens to handle multiple elements of the driving environment. However, contrary to physical buttons, these interfaces do not possess any tangible elements that allows the user to feel where the commands are. Without tactile feedback, users have to rely on visual cues and simple adjustment tasks become significant distractions that may lead to dangerous situations while driving. Recently, haptic touchscreens have emerged to restore tangibility to these interfaces, by rendering the sensation of feeling textures and shapes through friction modulation. However, we still do not have a good understanding of how these synthetic textures are perceived by humans, which is crucial to design meaningful and intuitive haptic interfaces. In this thesis, I first show that the perception thresholds of friction modulated textures are similar to vibrotactile thresholds. Then, I investigate the perception of haptic gradients, i.e., textures whose spatial frequency gradually changes. Hence, a law that describes the minimal exploration distance to perceive a given gradient is deduced. This law is similar to the auditory perception of rhythm variations, which suggests that there are common mechanisms between the two modalities. Finally, I demonstrate that gradient haptic feedback can guide a user to adjust a setting on an interface without vision. The findings shed new light on the understanding of haptic perception and its multisensory interactions and open up new possibilities in terms of human-machine interaction.
... Technologies have been used to create multisensory conflicts in order to manipulate self-consciousness (Ionta et al., 2011). For example, the 'rubber hand illusion' revealed that if participants observe a rubber hand being stroked synchronously with their own hand, they tend to report self-attribution of the rubber hand (Botvinick and Cohen, 1998; Ehrsson et al., 2004). An important aspect of such bodily self-consciousness is self-location. [Figure 1: Multisensory XR, including (1) the reality-virtuality continuum from Milgram and Kishino (1994), enhanced by (2) sensory inputs, and considering (3) their level of congruency with the environment.] ...
Article
Full-text available
The reality-virtuality continuum encompasses a multitude of objects, events, and environments ranging from real-world multisensory inputs to interactive multisensory virtual simulators, in which sensory integration can involve very different combinations of both physical and digital inputs. These different ways of stimulating the senses can affect the consumers’ consciousness, potentially altering their judgments and behaviours. In this perspective paper, we explore how technologies such as Augmented Reality (AR) and Virtual Reality (VR) can, by generating and modifying the human sensorium, act on consumer consciousness. We discuss the potential impact of this altered consciousness for consumer behaviour while, at the same time, considering how it may pave the way for further research.
... For example, the rubber hand illusion shows that misaligned visual input about the posture of a participant's arm Communicated by Melvyn A. Goodale. recalibrates their proprioceptive information, creating the strange impression that a rubber hand belongs to their body (Botvinick and Cohen 1998;Ehrsson et al. 2004). As another example, in the Pinocchio illusion, muscle spindle signals in the arm are manipulated through mechanical vibration, such that when a participant holds on to their nose, the latter appears to change in size (Lackner 1988). ...
Article
Full-text available
Hermosillo et al. (J Neurosci 31: 10019–10022, 2011) have suggested that action planning of hand movements impacts decisions about the temporal order judgments regarding vibrotactile stimulation of the hands. Specifically, these authors reported that the crossed-hand effect, a confusion about which hand is which when held in a crossed posture, gradually reverses some 320 ms before the arms begin to move from an uncrossed to a crossed posture or vice versa, such that the crossed-hand is reversed at the time of movement onset in anticipation of the movement’s end position. However, to date, no other study has attempted to replicate this dynamic crossed-hand effect. Therefore, in the present study, we conducted four experiments to revisit the question whether preparing uncrossed-to-crossed or crossed-to-uncrossed movements affects the temporo-spatial perception of tactile stimulation of the hands. We used a temporal order judgement (TOJ) task at different time stages during action planning to test whether TOJs are more difficult with crossed than uncrossed hands (“static crossed-hand effect”) and, crucially, whether planning to cross or uncross the hands shows the opposite pattern of difficulties (“dynamic crossed-hand effect”). As expected, our results confirmed the static crossed-hand effect. However, the dynamic crossed-hand effect could not be replicated. In addition, we observed that participants delayed their movements with late somatosensory stimulation from the TOJ task, even when the stimulations were meaningless, suggesting that the TOJ task resulted in cross-modal distractions. Whereas the current findings are not inconsistent with a contribution of motor signals to posture perception, they cast doubt on observations that motor signals impact state estimates well before movement onset.
... Thus, it is difficult to achieve embodiment when the stimuli presented to artificial body parts (mainly visual feedback) are not consistent with the sensory feedback (e.g., tactile, haptic, and proprioceptive) perceived through the natural body. It is also known that the sense of body ownership is restricted by spatial and anatomical consistency [17, 34, 97]. ...
Conference Paper
Full-text available
We propose a concept called “JIZAI Body” that allows each person to live the way they wish to live in society. One who acquires a JIZAI Body can (simultaneously) control (or delegate control of) their natural body and extensions of it, both in physical and cyberspace. We begin by describing the JIZAI Body and the associated JIZAI state in more detail. We then provide a review of the literature, focusing on human augmentation and cybernetics, robotics and virtual reality, neuro and cognitive sciences, and the humanities; fields which are necessary for the conception, design, and understanding of the JIZAI Body. We then illustrate the five key aspects of a JIZAI Body through existing works. Finally, we present a series of example scenarios to suggest what a JIZAI society may look like. Overall, we present the JIZAI Body as a preferred state to aspire towards when developing and designing augmented humans.
Article
In this paper we present a computational modeling account of an active self in artificial agents. In particular, we focus on how an agent can be equipped with a sense of control, and how that sense arises in autonomous situated action and, in turn, influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We present such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how this can support action control in an unpredictable environment. We present an implementation of this model as well as first evaluations in a simulated task scenario, in which an autonomous agent has to cope with un-/predictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low-/high-level action control varies, and they demonstrate how the sense of control can facilitate this.
Article
Full-text available
The perception of our body in space is flexible and manipulable. The predictive brain hypothesis explains this malleability as a consequence of the interplay between incoming sensory information and our body expectations. However, given the interaction between perception and action, we might also expect that actions would arise due to prediction errors, especially in conflicting situations. Here we describe a computational model, based on the free-energy principle, that forecasts involuntary movements in sensorimotor conflicts. We experimentally confirm those predictions in humans using a virtual reality rubber-hand illusion. Participants generated movements (forces) towards the virtual hand, regardless of its location with respect to the real arm, with little to no forces produced when the virtual hand overlaid their physical hand. The congruency of our model predictions and human observations indicates that the brain-body is generating actions to reduce the prediction error between the expected arm location and the new visual arm. This observed unconscious mechanism is an empirical validation of the perception–action duality in body adaptation to uncertain situations and evidence of the active component of predictive processing.
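The abstract's core idea, that an involuntary force arises from the residual prediction error between the believed and proprioceptively sensed arm position, can be illustrated with a minimal precision-weighted sketch. The function, its parameters, and the linear gain are assumptions made for illustration, not the authors' published model:

```python
def illusion_force(visual_x, proprio_x, precision_v=1.0, precision_p=1.0, gain=1.0):
    """Posterior belief about hand position as a precision-weighted average of
    the visual (virtual hand) and proprioceptive (real hand) estimates; the
    generated force is proportional to the residual proprioceptive prediction
    error at that belief, and so points toward the virtual hand."""
    mu = (precision_v * visual_x + precision_p * proprio_x) / (precision_v + precision_p)
    return gain * (mu - proprio_x)

print(illusion_force(visual_x=10.0, proprio_x=0.0))  # hands apart -> nonzero force
print(illusion_force(visual_x=0.0, proprio_x=0.0))   # hands overlaid -> zero force
```

Consistent with the experiment, the force vanishes when the virtual hand overlays the physical hand and grows with their separation.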
Chapter
Full-text available
It has been shown that the realism of the experience of using a virtual body in virtual reality (VR) is associated with ownership (one’s self-attribution of a body). However, whether or not ownership can be elicited for a virtual body presented from the third-person perspective is under debate. This study investigated the effect of multimodal presentations on the ownership of a male virtual body presented in the third-person perspective in three conditions (visuo-tactile, visuo-motor, and visuo-motor-tactile) (N = 40). We compared the illusory effect of ownership in the three conditions in male and female participants using a questionnaire and a 2 × 3 mixed-design ANOVA. Our study revealed that the male participants in the visuo-motor-tactile condition affirmed a moderate level (+1) of ownership, but the female participants did not. Ownership in the visuo-motor-tactile condition was significantly higher than in the visuo-tactile (p < .01) and visuo-motor (p < .05) conditions. The results suggest that both visuo-motor synchrony and visuo-tactile feedback are essential factors for inducing ownership of a virtual body in a third-person perspective. Moreover, our data suggest that matching the participant’s gender identity to the appearance of an avatar’s gender might be important for eliciting ownership. Additionally, we evaluated agency over the virtual body in the third-person perspective in the same three feedback conditions and found that only visuo-motor feedback is essential to elicit agency, unlike the causal factors of ownership.
Article
Motivated by a set of converging empirical findings and theoretical suggestions pertaining to the construct of ownership, we survey literature from multiple disciplines and present an extensive theoretical account linking the inception of a foundational naïve theory of ownership to principles governing the sense of (body) ownership. The first part of the account examines the emergence of the non-conceptual sense of ownership in terms of the minimal self and the body schema—a dynamic mental model of the body that functions as an instrument of directed action. A remarkable feature of the body schema is that it expands to incorporate objects that are objectively controlled by the person. Moreover, this embodiment of extracorporeal objects is accompanied by the phenomenological feeling of ownership towards the embodied objects. In fact, we argue that the sense of agency and ownership are inextricably linked, and that predictable control over an object can engender the sense of ownership. This relation between objective agency and the sense of ownership is moderated by gestalt-like principles. In the second part, we posit that these early emerging principles and experiences lead to the formation of a naïve theory of ownership rooted in notions of agential involvement.
Article
Understanding of the brain and the principles governing neural processing requires theories that are parsimonious, can account for a diverse set of phenomena, and can make testable predictions. Here, we review the theory of Bayesian causal inference, which has been tested, refined, and extended in a variety of tasks in humans and other primates by several research groups. Bayesian causal inference is normative and has explained human behavior in a vast number of tasks, including unisensory and multisensory perceptual tasks and sensorimotor and motor tasks, and has accounted for counter-intuitive findings. The theory has made novel predictions that have been tested and confirmed empirically, and recent studies have started to map its algorithms and neural implementation in the human brain. Its parsimony, the diversity of the phenomena it has explained, and its ability to illuminate brain function at all three of Marr’s levels of analysis make Bayesian causal inference a strong neuroscience theory. This also highlights the importance of collaborative and multi-disciplinary research for the development of new theories in neuroscience.
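The two-cue case of Bayesian causal inference reviewed above has a standard closed form (here following the Körding et al., 2007 formulation with a zero-mean Gaussian prior over source locations); a minimal sketch, with all parameter values illustrative:

```python
from math import exp, pi, sqrt

def gauss(x, mu, sigma):
    """Gaussian density."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def p_common(xv, xp, sigma_v, sigma_p, sigma_prior, prior_c=0.5):
    """Posterior probability that visual (xv) and proprioceptive (xp)
    measurements arise from one common source rather than two
    independent ones (zero-mean Gaussian prior over source locations)."""
    # Likelihood under a common cause (the shared source integrated out).
    var_c = (sigma_v**2 * sigma_p**2 + sigma_v**2 * sigma_prior**2
             + sigma_p**2 * sigma_prior**2)
    like_c = exp(-0.5 * ((xv - xp)**2 * sigma_prior**2
                         + xv**2 * sigma_p**2
                         + xp**2 * sigma_v**2) / var_c) / (2 * pi * sqrt(var_c))
    # Likelihood under independent causes (product of the two marginals).
    like_i = (gauss(xv, 0.0, sqrt(sigma_v**2 + sigma_prior**2))
              * gauss(xp, 0.0, sqrt(sigma_p**2 + sigma_prior**2)))
    return prior_c * like_c / (prior_c * like_c + (1 - prior_c) * like_i)

print(p_common(0.0, 0.5, 1.0, 1.0, 10.0))  # small disparity -> high
print(p_common(0.0, 8.0, 1.0, 1.0, 10.0))  # large disparity -> low
```

Small visual-proprioceptive disparities yield a high posterior probability of a common cause (favoring integration, as in the rubber hand illusion with congruent stroking), while large disparities favor segregation.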
Article
Full-text available
The brain mechanisms underlying the emergence of a normal sense of body ownership can be investigated starting from pathological conditions in which body awareness is selectively impaired. Here, we focused on pathological embodiment, a body ownership disturbance observed in brain-damaged patients who misidentify other people’s limbs as their own. We investigated whether such body ownership disturbance can be classified as a disconnection syndrome, using three different approaches based on diffusion tensor imaging: a) reconstruction of disconnectome maps in a large sample (N = 70) of stroke patients with and without pathological embodiment; b) probabilistic tractography, performed on age-matched healthy controls (N = 16), to trace cortical connections potentially interrupted in patients with pathological embodiment and spared in patients without this pathological condition; c) probabilistic “in vivo” tractography on two patients without and one patient with pathological embodiment. The converging results revealed the arcuate fasciculus and the third branch of the superior longitudinal fasciculus as mainly involved fiber tracts in patients showing pathological embodiment, suggesting that this condition could be related to the disconnection between frontal, parietal, and temporal areas. This evidence raises the possibility of a ventral self-body recognition route including regions where visual (computed in occipito-temporal areas) and sensorimotor (stored in premotor and parietal areas) body representations are integrated, giving rise to a normal sense of body ownership.
Preprint
Full-text available
Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure and corresponding sensory representations during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
Article
Full-text available
Our bodies provide a necessary scaffold for memories of past events. Yet, we are just beginning to understand how feelings of one’s own body during the encoding of realistic events shape memory. Participants formed memories for immersive, lifelike events by watching pre-recorded 3D videos that involved a first-person view of a mannequin’s body through head-mounted displays. We manipulated feelings of body ownership over the mannequin using a perceptual full-body illusion. Participants completed cued-recall questions and subjective ratings (i.e., degree of reliving, emotional intensity, vividness, and belief in memory accuracy) for each video immediately following encoding and one week later. Sensing the mannequin’s body as one’s own during encoding enhanced memory accuracy across testing points, immediate reliving, and delayed emotional intensity, vividness, and belief in memory accuracy. These findings demonstrate that a basic sense of bodily selfhood provides a crucial foundation for the accurate reliving of the past.
Article
Full-text available
The sense of body ownership (the feeling that the body belongs to the self) is commonly believed to arise through multisensory integration. This is famously shown in the rubber hand illusion (RHI), where touches applied synchronously to a fake hand and to the participant’s real hand (which is hidden from view) can induce a sensation of ownership over the fake one. Asynchronous touches weaken or abolish the illusion, and are typically used as a control condition. Subjective experience during the illusion is measured using a questionnaire, with some statements designed to capture illusory sensation and others designed as controls. However, recent work by Lush (2020, Collabra: Psychology) claimed that participants may have different expectations for questionnaire items in the synchronous condition compared to the asynchronous condition, and for the illusion-related items compared to the control items. This may mean that the classic RHI questionnaire is poorly controlled for demand characteristics. In the current work a conceptual replication of Lush (2020) was performed. Participants were presented with a video of the RHI procedure and reported the sensations they would expect to experience, both in free response and by rating questionnaire items. Participants had greater expectations for illusion statements in the synchronous condition compared to the asynchronous condition, and for illusion statements compared to control statements. However, free responses suggested that such expectations may be at least partially driven by exposure to the questionnaire items. Further work is necessary to understand whether similar expectations exist for the true RHI procedure, what might drive them, and whether they have an impact on reported RHI experience.
Article
Full-text available
Many studies have reported that bottom-up multisensory integration of visual, tactile, and proprioceptive information can distort our sense of body ownership, producing the rubber hand illusion (RHI). There is less evidence about when and how body ownership is distorted in the brain during the RHI. To examine whether this illusion effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity (vMMN) component (the index of automatic deviant detection) and N2 (the index of conflict monitoring). Participants first performed an RHI elicitation task in a synchronous or asynchronous setting and then completed a passive visual oddball task in which the deviant stimuli were unrelated to the explicit task. A significant interaction between Deviancy (deviant hand vs. standard hand) and Group (synchronous vs. asynchronous) was found. The asynchronous group showed clear mismatch effects in both vMMN and N2, while the synchronous group showed such an effect only in N2. The results indicate that after the elicitation of the RHI, bottom-up integration could be retrieved at an early stage of sensory processing before top-down processing, providing evidence for the priority of bottom-up processes after the generation of the RHI and revealing how body ownership is unconsciously distorted in the brain.
Article
Full-text available
The neurophysiological processes reflecting body illusions such as the rubber hand remain debated. Previous studies investigating the neural responses evoked by the illusion-inducing stimulation have provided diverging reports as to when these responses reflect the illusory state of the artificial limb becoming embodied. One reason for these diverging reports may be that different studies contrasted different experimental conditions to isolate potential correlates of the illusion, but individual contrasts may reflect multiple facets of the adopted experimental paradigm and not just the illusory state. To resolve these controversies, we recorded EEG responses in human participants and combined multivariate (cross-)classification with multiple Illusion and non-Illusion conditions. These conditions were designed to probe for markers of the illusory state that generalize across the spatial arrangements of limbs or the specific nature of the control object (a rubber hand or participant's real hand), hence which are independent of the precise experimental conditions used as contrast for the illusion. Our results reveal a parcellation of evoked responses into a temporal sequence of events. Around 125 and 275 ms following stimulus onset, the neurophysiological signals reliably differentiate the illusory state from non-Illusion epochs. These results consolidate previous work by demonstrating multiple neurophysiological correlates of the rubber hand illusion and illustrate how multivariate approaches can help pinpointing those that are independent of the precise experimental configuration used to induce the illusion.
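The multivariate cross-classification logic described here — train a decoder on one contrast, then test whether it transfers to epochs from a different experimental configuration — can be sketched with a minimal nearest-centroid classifier. The function names and toy data shapes are illustrative assumptions, not the study's actual pipeline:

```python
def centroid(rows):
    """Mean feature vector of a list of equal-length trials."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(x, c0, c1):
    """Assign trial x to the nearer class centroid (squared Euclidean distance)."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

def cross_classify(train0, train1, test0, test1):
    """Fit centroids on one condition pair, score accuracy on another.
    Above-chance transfer suggests a shared underlying signal (e.g. the
    illusory state) rather than a contrast-specific confound."""
    c0, c1 = centroid(train0), centroid(train1)
    correct = sum(predict(x, c0, c1) == 0 for x in test0)
    correct += sum(predict(x, c0, c1) == 1 for x in test1)
    return correct / (len(test0) + len(test1))
```

A decoder that generalizes in this way across limb arrangements or control objects isolates correlates of the illusion that do not depend on the precise experimental contrast, which is the point the abstract makes.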
Conference Paper
Full-text available
Human body movement is idiosyncratic and always challenging to document. This research explores ephemeral body movements and their documentation through a body-machine interface. The aim is to explore the body's aesthetics and to find possibilities for using the body as a moving structure in fashion design processes. These ephemeral body movements are documented through an installation setup consisting of a computer, PoseNet (a machine-learning model), a video camera, and a computer screen. The work reflects on how the machine interface translates the body and its movement through embodied interaction. The human body is translated into seventeen dots forming new bodily expressions based on its spatial level and structural characteristics while moving. The installation setup serves as a method, in general or within fashion design processes, for creating alternative design expressions.
Chapter
Neuroplasticity, i.e., the modifiability of the brain, is different in development and adulthood. The first includes changes in: (i) neurogenesis and control of neuron number; (ii) neuronal migration; (iii) differentiation of the somato-dendritic and axonal phenotypes; (iv) formation of connections; (v) cytoarchitectonic differentiation. These changes are often interrelated and can lead to: (vi) system-wide modifications of brain structure as well as to (vii) acquisition of specific functions such as ocular dominance or language. Myelination appears to be plastic both in development and adulthood, at least, in rodents. Adult neuroplasticity is limited, and is mainly expressed as changes in the strength of excitatory and inhibitory synapses while the attempts to regenerate connections have met with limited success. The outcomes of neuroplasticity are not necessarily adaptive, but can also be the cause of neurological and psychiatric pathologies.
Article
Previous studies have shown that the sensory modality used to identify regions of the body hidden from sight, but frequently viewed, influences the type of the body representation employed for reaching them with the finger. The question then arises as to whether this observation also applies to body regions which are rarely, if ever, viewed. We used an established technique for pinpointing the type of body representation used for the spatial encoding of targets which consisted of assessing the effect of peripheral gaze fixation on the pointing accuracy. More precisely, an exteroceptive, visually dependent, body representation is thought to be used if gaze deviation induces a deviation of the pointing movement. Three light-emitting diodes (LEDs) were positioned at the participants’ eye level at -25 deg, 0 deg and +25 deg. Without moving the head, the participant fixated the lit LED before the experimenter indicated one of the three target head positions: topmost point of the head (vertex) and two other points located at the front and back of the head. These targets were either verbal-cued or tactile-cued and the participants had to reach them with their index finger. We analysed the accuracy of the movements directed to the topmost point of the head, which is a well-defined, yet out of view anatomical point. Based on the possibility of the brain to create visual representations of the body areas that remain out of view, we hypothesized that the position of the vertex is encoded using an exteroceptive body representation, both when verbally or tactile-cued. Results revealed that the pointing errors were biased in the opposite direction of gaze fixation for both verbal-cued and tactile-cued targets, suggesting the use of a vision-dependent exteroceptive body representation. 
The enhancement of visual body representations by sensorimotor processes was suggested by the greater pointing accuracy when the vertex was identified by tactile stimulation compared to verbal instruction. Moreover, a control condition showed that participants were more accurate in indicating the position of their own vertex than the vertex of other people. Together, our results suggest that the positions of rarely viewed body parts are spatially encoded by an exteroceptive body representation and that non-visual sensorimotor processes are involved in the construction of this representation.
Article
We report an analysis of the relationship between self-injurious behavior and various measures of body representations and bodily sensations in adolescents and young women suffering from depression. The study included 85 female patients aged 16 to 25 with endogenous depression. The SCL-90-R questionnaire, the Body Investment Scale (BIS), the Physical Appearance Comparison Scale (PACS-R), the Body Satisfaction Scale (BSS), and the Cambridge Depersonalization Scale (CDS) were used. The response to the item "Sometimes I deliberately injure myself" served as the indicator of self-harm. Self-injurious behavior was related to emotional, cognitive, and behavioral features of body perception: a more negative body image (dissatisfaction with individual body parts and with the body as a whole) was reflected in behavioral manifestations, namely lower "Protection" scores and higher scores for self-monitoring, comparison of oneself with others, depersonalization, bodily dissociation, and somatization. For young women with depression, it was shown that in self-harm the body is "devalued," perceived as "bad," and the need to protect it is ignored. The severity of self-harm correlated directly with somatopsychic depersonalization. These results may indicate that rejection of one's body, an "alienated" attitude toward it, and depriving the body of "subjectness" may promote its use as an instrument for solving psychological problems, which is a risk factor for the development, consolidation, and worsening of self-injurious behavior. In psychotherapeutic intervention it is important to consider addressing pathological body perception as a complement to work on the capacity for emotional regulation.
Article
Full-text available
Our body has evolved in terrestrial gravity and altered gravitational conditions may affect the sense of body ownership (SBO). By means of the rubber hand illusion (RHI), we investigated the SBO during water immersion and parabolic flights, where unconventional gravity is experienced. Our results show that unconventional gravity conditions remodulate the relative weights of visual, proprioceptive, and vestibular inputs favoring vision, thus inducing an increased RHI susceptibility.
Article
Full-text available
Previous functional neuroimaging studies have implicated a fronto-parietal network in the control of visual attention. Here, we describe a series of attentional cueing studies which further investigate how this network allocates attentional resources to behaviorally relevant stimuli. In the first study, this network was shown to be involved in both spatial and non-spatial attention, with certain areas playing a greater role in spatial attention. The second study revealed that medial subregions of this network play a specific role in attentional orienting, while lateral subregions participate in more general aspects of cue processing, such as cue interpretation. A follow-up ERP cueing study demonstrated that frontal activity precedes parietal, consistent with theories that frontal regions communicate task goals to posterior brain areas. Another pair of cueing studies investigated how target discrimination difficulty modulates attention-related brain activity. Discrimination difficulty increased target-related activity; however, fronto-parietal responses to cues providing advance information as to the likely difficulty of an upcoming target discrimination did not vary as a function of expected difficulty. Lastly, in a study of attention to the global vs. local features of visual objects, the presence of distracting information during target discrimination resulted in reactivation of portions of the attentional orienting network.
Article
Full-text available
Four studies investigated 29 3-mo-old (Exp IV) and 60 5-mo-old (Exps I–III) infants' capacity to detect proprioceptive–visual relations uniting self-motion with a visual display of that motion. Previous research has shown that 5-mo-old infants can detect the invariant relationship between their own leg motion and a video display of that motion. The 1st 3 experiments showed that the 5-mo-olds discriminated between a perfectly contingent live display of their own leg motion and a noncontingent display of self or a peer. They showed this discrimination by preferential fixation of the noncontingent display. This effect was evident even when an S's direct view of his/her own body was occluded, eliminating video image discrimination on the basis of an intramodal visual comparison between the sight of self-motion and the video display of that motion. These results suggest that the contingency provided by a live display of one's body motion is perceived by detecting the invariant intermodal relationship between proprioceptive information for motion and the visual display of that motion. The detection of these relations may be fundamental to the development of self-perception in infancy. Although 3-mo-olds did not show significant discrimination of the contingent and noncontingent displays in Exp IV, they did show significantly more extreme looking proportions to the 2 displays than did the 5-mo-olds. This may reflect the infant's progression from a self- to a social orientation.
Article
Full-text available
In primates, the premotor cortex is involved in the sensory guidance of movement. Many neurons in ventral premotor cortex respond to visual stimuli in the space adjacent to the hand or arm. These visual receptive fields were found to move when the arm moved but not when the eye moved; that is, they are in arm-centered, not retinocentric, coordinates. Thus, they provide a representation of space near the body that may be useful for the visual control of reaching.
Article
Full-text available
The functional and structural properties of the dorsolateral frontal lobe and posterior parietal proximal arm representations were studied in macaque monkeys. Physiological mapping of primary motor (MI), dorsal premotor (PMd), and posterior parietal (area 5) cortices was performed in behaving monkeys trained in an instructed-delay reaching task. The parietofrontal corticocortical connectivities of these same areas were subsequently examined anatomically by means of retrograde tracing techniques. Signal-, set-, movement-, and position-related directional neuronal activities were distributed nonuniformly within the task-related areas in both frontal and parietal cortices. Within the frontal lobe, moving caudally from PMd to MI, the activity that signals for the visuospatial events leading to target localization decreased, while the activity more directly linked to movement generation increased. Physiological recordings in the superior parietal lobule revealed a gradient-like distribution of functional properties similar to that observed in the frontal lobe. Signal- and set-related activities were encountered more frequently in the intermediate and ventral part of the medial bank of the intraparietal sulcus (IPS), in area MIP. Movement- and position-related activities were distributed more uniformly within the superior parietal lobule (SPL), in both dorsal area 5 and in MIP. Frontal and parietal regions sharing similar functional properties were preferentially connected through their association pathways. As a result of this study, area MIP, and possibly areas MDP and 7m as well, emerge as the parietal nodes by which visual information may be relayed to the frontal lobe arm region. These parietal and frontal areas, along with their association connections, represent a potential cortical network for visual reaching. The architecture of this network is ideal for coding reaching as the result of a combination between visual and somatic information.
Article
Full-text available
Recent experiments are reviewed that indicate that sensory signals from many modalities, as well as efference copy signals from motor structures, converge in the posterior parietal cortex in order to code the spatial locations of goals for movement. These signals are combined using a specific gain mechanism that enables the different coordinate frames of the various input signals to be combined into common, distributed spatial representations. These distributed representations can be used to convert the sensory locations of stimuli into the appropriate motor coordinates required for making directed movements. Within these spatial representations of the posterior parietal cortex are neural activities related to higher cognitive functions, including attention. We review recent studies showing that the encoding of intentions to make movements is also among the cognitive functions of this area.
Article
Full-text available
In macaque ventral premotor cortex, we recorded the activity of neurons that responded to both visual and tactile stimuli. For these bimodal cells, the visual receptive field extended from the tactile receptive field into the adjacent space. Their tactile receptive fields were organized topographically, with the arms represented medially, the face represented in the middle, and the inside of the mouth represented laterally. For many neurons, both the visual and tactile responses were directionally selective, although many neurons also responded to stationary stimuli. In the awake monkeys, for 70% of bimodal neurons with a tactile response on the arm, the visual receptive field moved when the arm was moved. In contrast, for 0% the visual receptive field moved when the eye or head moved. Thus the visual receptive fields of most "arm + visual" cells were anchored to the arm, not to the eye or head. In the anesthetized monkey, the effect of arm position was similar. For 95% of bimodal neurons with a tactile response on the face, the visual receptive field moved as the head was rotated. In contrast, for 15% the visual receptive field moved with the eye and for 0% it moved with the arm. Thus the visual receptive fields of most "face + visual" cells were anchored to the head, not to the eye or arm. To construct a visual receptive field anchored to the arm, it is necessary to integrate the position of the arm, head, and eye. For arm + visual cells, the spontaneous activity, the magnitude of the visual response, and sometimes both were modulated by the position of the arm (37%), the head (75%), and the eye (58%). In contrast, to construct a visual receptive field that is anchored to the head, it is necessary to use the position of the eye, but not of the head or the arm. For face + visual cells, the spontaneous activity and/or response magnitude was modulated by the position of the eyes (88%), but not of the head or the arm (0%). 
Visual receptive fields anchored to the arm can encode stimulus location in "arm-centered" coordinates, and would be useful for guiding arm movements. Visual receptive fields anchored to the head can likewise encode stimuli in "head-centered" coordinates, useful for guiding head movements. Sixty-three percent of face + visual neurons responded during voluntary movements of the head. We suggest that "body-part-centered" coordinates provide a general solution to a problem of sensory-motor integration: sensory stimuli are located in a coordinate system anchored to a particular body part.
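The coordinate arithmetic implied by these findings — combining eye, head, and arm postures so that a stimulus location can be expressed relative to a body part — can be illustrated with a translation-only toy remapping. Real receptive-field remapping also involves rotations and gain-field modulation; the function and frame names below are assumptions for illustration only:

```python
def to_arm_centered(stim_retinal, eye_in_head, head_in_body, arm_in_body):
    """Remap a retinal stimulus location into arm-centered coordinates by
    chaining postural signals (2-D, translations only, for illustration)."""
    # Retinal -> body frame: add the eye-in-head and head-in-body offsets,
    # mirroring the abstract's point that an arm-anchored field must
    # integrate eye, head, and arm position signals.
    stim_in_body = [s + e + h for s, e, h in
                    zip(stim_retinal, eye_in_head, head_in_body)]
    # Body -> arm frame: subtract the arm's position in the body frame.
    return [s - a for s, a in zip(stim_in_body, arm_in_body)]
```

A head-centered field needs only the first step with the eye offset, which matches the observation that "face + visual" cells are modulated by eye position but not by head or arm position.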
Article
Full-text available
A central problem in motor control, in the representation of space, and in the perception of body schema is how the brain encodes the relative positions of body parts. According to psychophysical studies, this sense of limb position depends heavily on vision. However, almost nothing is currently known about how the brain uses vision to determine or represent the location of the arm or any other body part. The present experiment shows that the position of the arm is represented in the premotor cortex of the monkey (Macaca fascicularis) brain by means of a convergence of visual cues and proprioceptive cues onto the same neurons. These neurons respond to the felt position of the arm when the arm is covered from view. They also respond in a similar fashion to the seen position of a false arm.
Article
Full-text available
In the last few years, anatomical and physiological studies have provided new insights into the organization of the parieto-frontal network underlying visually guided arm-reaching movements in at least three domains. (1) Network architecture. It has been shown that the different classes of neurons encoding information relevant to reaching are not confined within individual cortical areas, but are common to different areas, which are generally linked by reciprocal association connections. (2) Representation of information. There is evidence suggesting that reach-related populations of neurons do not encode relevant parameters within pure sensory or motor "reference frames", but rather combine them within hybrid dimensions. (3) Visuomotor transformation. It has been proposed that the computation of motor commands for reaching occurs as a simultaneous recruitment of discrete populations of neurons sharing similar properties in different cortical areas, rather than as a serial process from vision to movement, engaging different areas at different times. The goal of this paper was to link experimental (neurophysiological and neuroanatomical) and computational aspects within an integrated framework to illustrate how different neuronal populations in the parieto-frontal network operate a collective and distributed computation for reaching. In this framework, all dynamic (tuning, combinatorial, computational) properties of units are determined by their location relative to three main functional axes of the network, the visual-to-somatic, position-direction, and sensory-motor axis. The visual-to-somatic axis is defined by gradients of activity symmetrical to the central sulcus and distributed over both frontal and parietal cortices. At least four sets of reach-related signals (retinal, gaze, arm position/movement direction, muscle output) are represented along this axis. This architecture defines informational domains where neurons combine different inputs. 
The position-direction axis is identified by the regular distribution of information over large populations of neurons processing both positional and directional signals (concerning the arm, gaze, visual stimuli, etc.). Therefore, the activity of gaze- and arm-related neurons can represent virtual three-dimensional (3D) pathways for gaze shifts or hand movement. Virtual 3D pathways are thus defined by a combination of directional and positional information. The sensory-motor axis is defined by neurons displaying different temporal relationships with the different reach-related signals, such as target presentation, preparation for intended arm movement, onset of movements, etc. These properties reflect the computation performed by local networks, which are formed by two types of processing units: matching and condition units. Matching units relate different neural representations of virtual 3D pathways for gaze or hand, and can predict motor commands and their sensory consequences. Depending on the units involved, different matching operations can be learned in the network, resulting in the acquisition of different visuo-motor transformations, such as those underlying reaching to foveated targets, reaching to extrafoveal targets, and visual tracking of hand movement trajectory. Condition units link these matching operations to reinforcement contingencies and therefore can shape the collective neural recruitment along the three axes of the network. This will result in a progressive match of retinal, gaze, arm, and muscle signals suitable for moving the hand toward the target.
Article
Full-text available
Area 5 in the parietal lobe of the primate brain is thought to be involved in monitoring the posture and movement of the body. In this study, neurons in monkey area 5 were found to encode the position of the monkey's arm while it was covered from view. The same neurons also responded to the position of a visible, realistic false arm. The neurons were not sensitive to the sight of unrealistic substitutes for the arm and were able to distinguish a right from a left arm. These neurons appear to combine visual and somatosensory signals in order to monitor the configuration of the limbs. They could form the basis of the complex body schema that we constantly use to adjust posture and guide movement.
Article
Full-text available
Functional magnetic resonance imaging (fMRI) is widely used to study the operational organization of the human brain, but the exact relationship between the measured fMRI signal and the underlying neural activity is unclear. Here we present simultaneous intracortical recordings of neural signals and fMRI responses. We compared local field potentials (LFPs), single- and multi-unit spiking activity with highly spatio-temporally resolved blood-oxygen-level-dependent (BOLD) fMRI responses from the visual cortex of monkeys. The largest magnitude changes were observed in LFPs, which at recording sites characterized by transient responses were the only signal that significantly correlated with the haemodynamic response. Linear systems analysis on a trial-by-trial basis showed that the impulse response of the neurovascular system is both animal- and site-specific, and that LFPs yield a better estimate of BOLD responses than the multi-unit responses. These findings suggest that the BOLD contrast mechanism reflects the input and intracortical processing of a given area rather than its spiking output.
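The linear-systems analysis this abstract describes — treating the BOLD signal as the neural time course convolved with a hemodynamic impulse response — can be sketched as follows. The double-gamma kernel shape and parameter values below are generic SPM-style conventions for illustration, not the animal- and site-specific impulse responses estimated in the study:

```python
import math

def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma hemodynamic response function (illustrative)."""
    def gamma_pdf(t, k, theta=1.0):
        if t <= 0:
            return 0.0
        return t ** (k - 1) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)
    # Positive response peaking near t = peak - 1 s, minus a late undershoot.
    return gamma_pdf(t, peak) - ratio * gamma_pdf(t, undershoot)

def predict_bold(lfp_power, dt=1.0, kernel_len=32.0):
    """Predict a BOLD time course by convolving an LFP power time course
    with the hemodynamic impulse response (discrete causal convolution)."""
    kernel = [hrf(i * dt) for i in range(int(kernel_len / dt))]
    bold = []
    for n in range(len(lfp_power)):
        bold.append(sum(lfp_power[n - k] * kernel[k]
                        for k in range(min(n + 1, len(kernel)))))
    return bold
```

Under this model a brief burst of LFP power yields a BOLD response that peaks several seconds later, and comparing how well LFP versus multi-unit time courses predict the measured BOLD (e.g. by correlation) is the kind of trial-by-trial comparison the study reports.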
Article
Full-text available
Subjects perceived touch sensations as arising from a table (or a rubber hand) when both the table (or the rubber hand) and their own real hand were repeatedly tapped and stroked in synchrony with the real hand hidden from view. If the table or rubber hand was then 'injured', subjects displayed a strong skin conductance response (SCR) even though nothing was done to the real hand. Sensations could even be projected to anatomically impossible locations. The illusion was much less vivid, as indicated by subjective reports and SCR, if the real hand was simultaneously visible during stroking, or if the real hand was hidden but touched asynchronously. The fact that the illusion could be significantly diminished when the real hand was simultaneously visible suggests that the illusion and associated SCRs were due to perceptual assimilation of the table (or rubber hand) into one's body image rather than associative conditioning. These experiments demonstrate the malleability of body image and the brain's remarkable capacity for detecting statistical correlations in the sensory input.
Article
The visual responses of single neurons of the periarcuate cortex have been studied in the macaque monkey. Two sets of neurons responding to visual stimuli have been found. The first set, located rostral to the arcuate sulcus, was formed by units that could be activated by stimuli presented far from the animal. These neurons had large receptive fields and were neither orientation nor direction selective. The second set, found predominantly caudal to the arcuate sulcus, was formed by units that were maximally or even exclusively activated by stimuli presented in the space immediately around the animal. These neurons were bimodal, responding also to somatosensory stimuli. According to the location of their visual responding regions, the bimodal neurons were subdivided into pericutaneous (54%) and distant peripersonal neurons (46%). The former responded best to stimuli presented a few centimeters from the skin, the latter to stimuli within the animal's reaching distance. The visual responding regions were spatially related to the tactile fields. It is argued that neurons with a receptive field consisting of several responding areas, some in one sensory modality, some in another, have a praxic function and that they are involved in organizing sequences of movements.
Article
Illusions have historically been of great use to psychology for what they can reveal about perceptual processes. We report here an illusion in which tactile sensations are referred to an alien limb. The effect reveals a three-way interaction between vision, touch and proprioception, and may supply evidence concerning the basis of bodily self-identification.
Article
Anterograde and retrograde transport methods were used to study the corticocortical connectivity of areas 3a, 3b, 1, 2, 5, 4 and 6 of the monkey cerebral cortex. Fields were identified by cytoarchitectonic features and by thalamic connectivity in the same brains. Area 3a was identified by first recording a short latency group I afferent evoked potential. Attempts were made to analyze the data in terms of: (1) routes whereby somatic sensory input might influence the performance of motor cortex neurons; (2) possible multiple representations of the body surface in the component fields of the first somatic sensory area (SI).
Article
The definition of visual areas remains a key problem in the effort to elucidate cortical functions. Visual areas vary along a number of dimensions and are increasingly difficult to define according to traditional criteria at higher levels of the hierarchy. Three recently discovered areas in monkey parietal association cortex illustrate a new approach to this problem. Their definition depends on assessment of neuronal response properties in the alert, behaving animal combined with precise reconstruction of recording sites. This approach permits recognition of functionally distinct areas in the absence of retinotopic maps.
Article
The response of individual neurons within Brodmann's area 5 of the parietal lobe to physiological somesthetic stimuli was studied in unanesthetized rhesus monkeys. Most neurons were sensitive to light mechanical stimulation of the skin and deep tissue and/or joint rotation. These neurons were distinguishable from those of the primary somatosensory area by several characteristics: multiple joint interaction, ipsilateral receptive field, bilateral interaction, interaction between forelimb and hindlimb or trunk, and excitatory interaction of joint and skin stimuli. Some were highly selective, in that they responded only to certain critical patterns of stimuli involving both joint and skin. The results were viewed as supporting the hypothesis that area 5 is the site of higher-order processing of somesthetic information received from the lemniscal system, and may give rise to the neural code of position and form of body and tactile objects in 3-dimensional space.
Article
The receptive fields of single cells in area 5 of monkey parietal cortex were studied by extracellular recording. Cells were driven primarily by gentle manipulation of multiple joints residing on one or more limbs. Both excitatory and inhibitory convergence were demonstrated. It is postulated that the multijoint receptive fields of area 5 are the result of convergence from single-joint cells of the primary receiving area. An analogy is drawn between the modification of information in the visual and somatosensory systems.
Article
The afferent properties of single neurons of the periarcuate cortex have been studied in the macaque monkey. Most of the recorded neurons responded to stimuli in one or two sensory modalities and, accordingly, they were classified as somatosensory, visual or bimodal (visual and somatosensory) neurons. Visual neurons were located rostral to the arcuate sulcus, whereas the somatosensory and the bimodal neurons were found predominantly caudal to this sulcus. Somatosensory neurons (n = 102) and bimodal neurons (n = 69) had identical somatic afferent properties. They were subdivided into 'tactile' neurons, 'joint' neurons and 'tactile and joint' neurons. 'Tactile' neurons (70%) had receptive fields formed either by a single responding area or by two or more spatially separated responding areas. The parts of the body most represented were the hands and the mouth. 'Joint' neurons (10%) were activated by the rotation of one or, more often, of two or more articulations. The movement of the hand towards the mouth was the most frequently represented movement. 'Tactile and joint' neurons (20%) responded to both tactile and joint stimulation, with receptive field locations and properties like those of the other two classes of neurons. Some 'joint' and 'tactile and joint' neurons had summing properties, i.e. their response to tactile or joint stimulation was conditional upon a simultaneous stimulation of another articulation. The data are interpreted as evidence in favor of the existence of an area in the agranular cortex that organizes mouth movements and hand-to-mouth movements.
Article
Positron emission tomography (PET) was used to identify the brain areas involved in visually guided reaching by measuring regional cerebral blood flow (rCBF) in six normal volunteers while they were fixating centrally and reaching with the left or right arm to targets presented in either the right or the left visual field. The PET images were registered with magnetic resonance images from each subject so that increases in rCBF could be localized with anatomical precision in individual subjects. Increased neural activity was examined in relation to the hand used to reach, irrespective of field of reach (hand effect), and the effects of target field of reach, irrespective of hand used (field effect). A separate analysis on intersubject, averaged PET data was also performed. A comparison of the results of the two analyses showed close correspondence in the areas of activation that were identified. We did not find a strict segregation of regions associated exclusively with either hand or field. Overall, significant rCBF increases in the hand and field conditions occurred bilaterally in the supplementary motor area, premotor cortex, cuneus, lingual gyrus, superior temporal cortex, insular cortex, thalamus, and putamen. Primary motor cortex, postcentral gyrus, and the superior parietal lobule (intraparietal sulcus) showed predominantly a contralateral hand effect, whereas the inferior parietal lobule showed this effect for the left hand only. Greater contralateral responses for the right hand were observed in the secondary motor areas. Only the anterior and posterior cingulate cortices exhibited strong ipsilateral hand effects. Field of reach was more commonly associated with bilateral patterns of activation in the areas with contralateral or ipsilateral hand effects. These results suggest that the visual and motor components of reaching may have a different functional organization and that many brain regions represent both limb of reach and field of reach. However, since posterior parietal cortex is connected with all of these regions, we suggest that it plays a crucial role in the integration of limb and field coordinates.
Article
In this review I discuss four neurological disorders that involve alterations of the self or self-awareness. These include the alien hand syndrome, asomatognosia, misidentification of the self in the mirror, and personal confabulation. In one way or another all of these inform us about the neurobiological basis of the self. It is suggested that mirror self-misidentification, asomatognosia, and personal confabulation are related syndromes. They are interpreted as examples of a perturbation in relatedness between the self and the environment related to delusional misidentification syndromes. The particular role of bilateral frontolimbic damage in producing disturbances of the self is discussed.
A series of recent anatomical and functional data has radically changed our view on the organization of the motor cortex in primates. In the present article we present this view and discuss its fundamental principles. The basic principles are the following: (a) the motor cortex, defined as the agranular frontal cortex, is formed by a mosaic of separate areas, each of which contains an independent body movement representation, (b) each motor area plays a specific role in motor control, based on the specificity of its cortical afferents and descending projections, (c) in analogy to the motor cortex, the posterior parietal cortex is formed by a multiplicity of areas, each of which is involved in the analysis of particular aspects of sensory information. There are no such things as multipurpose areas for space or body schema and (d) the parieto-frontal connections form a series of segregated anatomical circuits devoted to specific sensorimotor transformations. These circuits transform sensory information into action. They represent the basic functional units of the motor system. Although these conclusions mostly derive from monkey experiments, anatomical and brain-imaging evidence suggest that the organization of human motor cortex is based on the same principles. Possible homologies between the motor cortices of humans and non-human primates are discussed.
Article
Anosognosia (i.e., denial of hemiparesis) and asomatognosia (i.e., inability to recognize the affected limb as one's own) occur more frequently with right cerebral lesions. However, the incidence, relative recovery, and underlying mechanisms remain unclear. Anosognosia and asomatognosia were examined in 62 patients undergoing the intracarotid amobarbital procedure as part of their preoperative evaluation for epilepsy surgery. Additional questions were asked in the last 32 patients studied. During inactivation of the non-language-dominant cerebral hemisphere, 88% of the 62 patients were unaware of their paralysis, and 82% could not recognize their own hand at some point. Only 3% did not exhibit anosognosia or asomatognosia. In general, asomatognosia resolved earlier than anosognosia. When patients could not recognize their hand, they uniformly thought that it was someone else's hand. Dissociations in awareness were seen in the second series of 32 patients. Although 23 patients (72%) thought that both arms were in the air, 31% pointed to the correct position of the paralyzed arm on the table. Despite the inability of 24 of 32 patients (75%) to recognize their own hand, 21% of these patients were aware that their arm was weak, and 38% had correctly located their paralyzed arm on the angiography table. Anosognosia and asomatognosia are both common during acute dysfunction of the non-language-dominant cerebral hemisphere. Dissociations of perception of location, weakness, and ownership of the affected limb are frequent, as are misperceptions of location and body part identity. The dissociations suggest that multiple mechanisms are involved.
Article
When the apparent visual location of a body part conflicts with its veridical location, vision can dominate proprioception and kinesthesia. In this article, we show that vision can capture tactile localization. Participants discriminated the location of vibrotactile stimuli (upper, at the index finger, vs. lower, at the thumb), while ignoring distractor lights that could independently be upper or lower. Such tactile discriminations were slowed when the distractor light was incongruent with the tactile target (e.g., an upper light during lower touch) rather than congruent, especially when the lights appeared near the stimulated hand. The hands were occluded under a table, with all distractor lights above the table. The effect of the distractor lights increased when rubber hands were placed on the table, "holding" the distractor lights, but only when the rubber hands were spatially aligned with the participant's own hands. In this aligned situation, participants were more likely to report the illusion of feeling touch at the rubber hands. Such visual capture of touch appears cognitively impenetrable.
Article
Recognizing oneself, easy as it appears to be, seems at least to require awareness of one's body and one's actions. To investigate the contribution of these factors to self-recognition, we presented normal subjects with an image of both their own and the experimenter's hand. The hands could make the same, a different or no movement and could be displayed in various orientations. Subjects had to tell whether the indicated hand was theirs or not. The results showed that a congruence between visual signals and signals indicating the position of the body is one component on which self-recognition is based. Recognition of one's actions is another component. Subjects had most difficulty in recognizing their hand when movements were absent. When the two hands made different movements, subjects relied exclusively on the movement cue and recognition was almost perfect. Our findings are in line with pathological alterations in the sense of body and the sense of action.
Article
Using functional magnetic resonance imaging (fMRI) in humans, we identified regions of cortex involved in the encoding of limb position. Tactile stimulation of the right hand, across the body midline, activated the right parietal cortex when the eyes were closed; activation shifted to a left parietofrontal network when the eyes were open. These data reveal important similarities between human and non-human primates in the network of brain areas involved in the multisensory representation of limb position.
Article
The primary motor cortex (MI) is regarded as the site for motor control. Occasional reports that MI neurons react to sensory stimuli have either been ignored or attributed to guidance of voluntary movements. Here, we show that MI activation is necessary for the somatic perception of movement of our limbs. We made use of an illusion: when the wrist tendon of one hand is vibrated, it is perceived as the hand moving. If the vibrated hand has skin contact with the other hand, it is perceived as both hands bending. Using fMRI and TMS, we show that the activation in MI controlling the nonvibrated hand is compulsory for the somatic perception of the hand movement. This novel function of MI contrasts with its traditional role as the executive locus of voluntary limb movement.
G. Rizzolatti, C. Scandolara, M. Matelli, M. Gentilucci, Behav. Brain Res. 2, 147 (1981).
H. H. Ehrsson et al., J. Neurophysiol. 83, 528 (2000).
M. S. Graziano, Proc. Natl. Acad. Sci. U.S.A. 96, 10418 (1999).
H. Sakata, Y. Takaoka, A. Kawarasaki, H. Shibutani, Brain Res. 64, 85 (1973).
F. Pavani, C. Spence, J. Driver, Psychol. Sci. 11, 353 (2000).
D. M. Lloyd, D. I. Shore, C. Spence, G. A. Calvert, Nature Neurosci. 6, 17 (2003).
M. Botvinick, J. Cohen, Nature 391, 756 (1998).
R. W. Mitchell, in The Self: From Soul to Brain, J. LeDoux, Ed. (New York Academy of Sciences, New York, 2003), pp. 39-62.
P. B. Johnson, S. Ferraina, L. Bianchi, R. Caminiti, Cereb. Cortex 6, 102 (1996).
M. S. Graziano, X. T. Hu, C. G. Gross, J. Neurophysiol. 77, 2268 (1997).
Y. Burnod et al., Exp. Brain Res. 129, 325 (1999).
We thank E. Featherstone and P. Aston for their technical support and C. Frith for serving as a pre-reviewer on an earlier version of this manuscript.
R. A. Andersen, L. H. Snyder, D. C. Bradley, J. Xing, Annu. Rev. Neurosci. 20, 303 (1997).
C. Kertzman, U. Schwarz, T. A. Zeffiro, M. Hallett, Exp. Brain Res. 114, 170 (1997).
M. Critchley, Mt. Sinai J. Med. 41, 82 (1974).
E. Naito, P. E. Roland, H. H. Ehrsson, Neuron 36, 979 (2002).
M. S. Graziano, D. F. Cooke, C. S. Taylor