Article

Tactile-visual integration in the posterior parietal cortex: A functional magnetic resonance imaging study


Abstract

To explore the neural substrates of visual-tactile crossmodal integration during motion direction discrimination, we conducted functional magnetic resonance imaging with 15 subjects. We initially performed independent unimodal visual and tactile experiments involving motion direction matching tasks. Visual motion discrimination activated the occipital cortex bilaterally, extending to the posterior portion of the superior parietal lobule, and the dorsal and ventral premotor cortex. Tactile motion direction discrimination activated the bilateral parieto-premotor cortices. The left superior parietal lobule, intraparietal sulcus, bilateral premotor cortices and right cerebellum were activated during both visual and tactile motion discrimination. Tactile discrimination deactivated the visual cortex including the middle temporal/V5 area. To identify crossmodal interference of neural activity in both the unimodal and the multimodal areas, the same subjects then performed event-related crossmodal experiments, in which crossmodal tactile-visual matching tasks and intramodal tactile-tactile and visual-visual matching tasks were completed within the same session. The activity detected during intramodal tasks in the visual regions (including the middle temporal/V5 area) and the tactile regions was suppressed during crossmodal conditions compared with intramodal conditions. Within the polymodal areas, the left superior parietal lobule and the premotor areas were activated by crossmodal tasks. The left superior parietal lobule was more prominently activated under congruent event conditions than under incongruent conditions. These findings suggest that a reciprocal and competitive association between the unimodal and polymodal areas underlies the interaction between motion direction-related signals received simultaneously from different sensory modalities.
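As an illustration of the kind of condition contrasts such an event-related design implies, the sketch below sets up hypothetical regressor names and contrast weights for a crossmodal-versus-intramodal comparison and a congruent-versus-incongruent comparison. The regressor names and weights are assumptions for demonstration only, not the contrasts actually specified in the paper.

```python
# Hedged sketch of GLM contrast weights for an event-related design of the
# kind described above; regressor names and weights are illustrative.
import numpy as np

regressors = ["tactile-tactile", "visual-visual",
              "crossmodal-congruent", "crossmodal-incongruent"]

# crossmodal vs. intramodal: average of the two crossmodal conditions
# minus the average of the two intramodal conditions
c_cross_vs_intra = np.array([-0.5, -0.5, +0.5, +0.5])

# congruent vs. incongruent within the crossmodal conditions
c_congruency = np.array([0.0, 0.0, +1.0, -1.0])

for name, c in [("crossmodal > intramodal", c_cross_vs_intra),
                ("congruent > incongruent", c_congruency)]:
    print(name, dict(zip(regressors, c)))
```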

... Somatosensory input provides various kinds of information. Previous studies have focused on the ability to discriminate various stimulus parameters, including frequency [1][2][3], time [4,5], intensity [6], space [5,7], pattern [8,9], and motion [10][11][12][13]. Regarding discrimination of stimulus motion in two directions, a previous study reported that stimuli presented to the fingertip were discriminated above chance level between motion parallel to the long axis of the finger (vertical) and motion across the finger (horizontal) [10]. ...
... Regarding discrimination of stimulus motion in two directions, a previous study reported that stimuli presented to the fingertip were discriminated above chance level between motion parallel to the long axis of the finger (vertical) and motion across the finger (horizontal) [10]. Previous fMRI studies showed significant activity in the primary somatosensory cortex (S1), secondary somatosensory cortex (S2), and inferior parietal cortex (IPC) during a task requiring discrimination of the direction of stimulus motion [11]. Thus, activity in these cortical regions is possibly important for discriminating the moving direction of somatosensory stimulation. ...
... In addition to these core somatosensory cortical regions, a variety of other cortical regions are active during the discrimination task under MS [8,9,33]. In the tactile direction discrimination task used in our previous study, the right index and middle fingers were placed on separate rails, and the task was to discriminate whether the pins moved along the rails in congruent or incongruent directions; the results revealed that S1, S2, and IPC were active during this task [11]. In another study, a moving tactile stimulus was delivered to both index fingers simultaneously, moving to the right or to the left. ...
Article
Full-text available
Background: Mechanical tactile stimulation, such as with plastic pins or an airflow-driven membrane, induces cortical activity, and this activity depends on the stimulation pattern. The stimulation pattern of a mechanical tactile intervention may therefore influence its effect on somatosensory function. However, the effect of the input pattern of mechanical tactile stimulation on somatosensory function has not yet been investigated at the behavioral level. The present study aimed to clarify the effects of mechanical tactile interventions with different stimulation patterns on the ability to discriminate movement direction.

Results: Twenty healthy adults participated in the experiment. Three intervention conditions were used: (1) the whole stimulus surface was stimulated, (2) the stimulus moved within the stimulus surface, and (3) a no-stimulus condition. The effects of the intervention on tactile discrimination were evaluated using a simple reaction task and a choice reaction task requiring discrimination of movement direction, with reaction time, correct rate, and rate correct score calculated as measures of task performance. We examined the effects of the intervention on the ability to discriminate movement direction over a certain period under the three intervention conditions. The mean reaction time during the simple reaction task did not differ significantly before and after the intervention under any condition. Similarly, for the choice reaction task, the mean reaction time and correct rate did not differ significantly before and after the intervention under the vertical and horizontal conditions. However, the rate correct score showed a significant improvement after the horizontally moving tactile stimulation intervention under both the vertical and horizontal conditions.

Conclusions: Our results show that the effect of a mechanical tactile intervention on the ability to discriminate the direction of moving tactile stimulation depends on the input pattern of the intervention, and suggest potential therapeutic benefits of sustained tactile stimulation. This study demonstrates that behavioral performance can be changed by mechanical tactile interventions, highlighting their potential in the field of rehabilitation.
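The rate correct score mentioned above is commonly defined as the number of correct responses divided by the total response time. The short sketch below computes it alongside mean reaction time and correct rate from invented trial data, assuming that standard definition; the numbers and variable names are illustrative only, not values from the study.

```python
# Illustrative computation of reaction-time performance measures used in
# choice-reaction experiments; trial data below are made up for demonstration.
correct = [True, True, False, True, True, True, False, True]   # trial outcomes
rt_s    = [0.42, 0.39, 0.55, 0.41, 0.44, 0.38, 0.60, 0.40]     # reaction times (s)

n_trials  = len(correct)
n_correct = sum(correct)
mean_rt   = sum(rt for rt, ok in zip(rt_s, correct) if ok) / n_correct  # mean RT of correct trials
correct_rate = n_correct / n_trials
# Rate correct score (RCS): correct responses per second of total response time
rcs = n_correct / sum(rt_s)

print(f"mean RT = {mean_rt:.3f} s, correct rate = {correct_rate:.2f}, RCS = {rcs:.2f} /s")
```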
... Previous work has suggested that LPN is mediated by local generators in the posterior parietal cortex, and it is primarily observed in tasks including retrieval requirements for conjunctions of memory attributes (Johansson and Mecklinger, 2003). Single-unit and neuroimaging studies have shown that the ventral intraparietal area (Avillac et al., 2007) and the posterior parietal cortex (Amedi et al., 2002;Nakashita et al., 2008) are involved in visuo-tactile crossmodal associations and integrations. In such associations, the visual signal from the primary visual areas may activate the association network via the posterior parietal areas (Amedi et al., 2002;Nakashita et al., 2008). ...
... Single-unit and neuroimaging studies have shown that the ventral intraparietal area (Avillac et al., 2007) and the posterior parietal cortex (Amedi et al., 2002;Nakashita et al., 2008) are involved in visuo-tactile crossmodal associations and integrations. In such associations, the visual signal from the primary visual areas may activate the association network via the posterior parietal areas (Amedi et al., 2002;Nakashita et al., 2008). Our recent study has also shown that single-pulse transcranial magnetic stimulation (spTMS) applied over the human contralateral posterior parietal cortex (cPPC) at 600 ms after the onset of the first stimulus (S-1) significantly impairs participants' behavioral performance in a tactile-visual cross-modal working memory task (Ku et al., 2015). ...
... Furthermore, we found that in the crossmodal task, the distribution of learning-related activity was mostly located in posterior electrodes. This distribution is consistent with the finding in previous imaging studies of crossmodal associations and integrations (Amedi et al., 2002;Nakashita et al., 2008). ...
Article
Full-text available
Studies have indicated that a cortical sensory system is capable of processing information from different sensory modalities. However, it still remains unclear when and how a cortical system integrates and retains information across sensory modalities during learning. Here we investigated the neural dynamics underlying crossmodal associations and memory by recording event-related potentials (ERPs) when human participants performed visuo-tactile (crossmodal) and visuo-visual (unimodal) paired-associate (PA) learning tasks. In a trial of the tasks, the participants were required to explore and learn the relationship (paired or non-paired) between two successive stimuli. EEG recordings revealed dynamic ERP changes during participants' learning of paired-associations. Specifically, 1) the frontal N400 component showed learning-related changes in both unimodal and crossmodal tasks but did not show any significant difference between these two tasks, while the central P400 displayed both learning-changes and task-differences; 2) a late posterior negative slow wave (LPN) showed the learning effect only in the crossmodal task; 3) alpha-band oscillations appeared to be involved in crossmodal working memory. Additional behavioral experiments suggested that these ERP components were not relevant to the participants' familiarity with stimuli per se. Further, by shortening the delay length (from 1300 ms to 400 ms or 200 ms) between the first and second stimulus in the crossmodal task, declines in participants' task performance were observed accordingly. Taken together, these results provide insights into the cortical plasticity (induced by PA learning) of neural networks involved in crossmodal associations in working memory.
... A number of neuroimaging studies have examined the neural substrates involved in tactile motion processing [11][12][13][14][15][16][17][18] . These studies have indicated that tactile motion processing involves a distributed brain network including the postcentral gyrus, parietal operculum, posterior parietal lobule, lateral occipito-temporal cortex, and cerebellum. ...
... These studies have indicated that tactile motion processing involves a distributed brain network including the postcentral gyrus, parietal operculum, posterior parietal lobule, lateral occipito-temporal cortex, and cerebellum. However, as few studies have examined brain activity when participants are required to judge the direction and speed of a moving object 11,15,16 , the relative contributions of these regions to tactile motion processing remain unclear. Indeed, to the best of our knowledge, only one neuroimaging study has investigated the neural substrates of tactile speed processing 11 . ...
... The findings of the present study are consistent with those of a few neuroimaging studies in which participants were required to make judgments regarding the velocity of motion. In these studies, the lateral occipito-temporal cortex exhibited decreased or absent activation in sighted individuals 11,15,16. However, it is somewhat puzzling that repetitive transcranial magnetic stimulation (rTMS) of the lateral occipito-temporal cortex was observed to interfere with tactual detection of changes in speed, a fact that is inconsistent with the neuroimaging findings 43. ...
Article
Full-text available
Humans are able to judge the speed of an object's motion by touch. Research has suggested that tactile judgment of speed is influenced by physical properties of the moving object, though the neural mechanisms underlying this process remain poorly understood. In the present study, functional magnetic resonance imaging was used to investigate brain networks that may be involved in tactile speed classification and how such networks may be affected by an object's texture. Participants were asked to classify the speed of 2-D raised dot patterns passing under their right middle finger. Activity in the parietal operculum, insula, and inferior and superior frontal gyri was positively related to the motion speed of dot patterns. Activity in the postcentral gyrus and superior parietal lobule was sensitive to dot periodicity. Psycho-physiological interaction (PPI) analysis revealed that dot periodicity modulated functional connectivity between the parietal operculum (related to speed) and postcentral gyrus (related to dot periodicity). These results suggest that texture-sensitive activity in the primary somatosensory cortex and superior parietal lobule influences brain networks associated with tactually-extracted motion speed. Such effects may be related to the influence of surface texture on tactile speed judgment.
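As a rough illustration of what a psycho-physiological interaction (PPI) analysis of the kind reported above estimates, the sketch below builds a generic PPI design with a seed time course, a task regressor, and their interaction term, then fits it by least squares. It is a simplified sketch (real PPI pipelines typically deconvolve the seed signal to the neural level and convolve with a hemodynamic response), and none of the variables correspond to the study's actual data.

```python
# Minimal sketch of a psycho-physiological interaction (PPI) regression,
# assuming a seed time course and a task regressor are already available;
# a generic illustration, not the pipeline used in the study.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200
seed_ts = rng.standard_normal(n_scans)              # physiological regressor (seed-region BOLD)
task    = np.tile([0] * 10 + [1] * 10, n_scans // 20).astype(float)  # psychological regressor (blocks)
ppi     = seed_ts * (task - task.mean())            # interaction term (demeaned task x seed)
y       = rng.standard_normal(n_scans)              # BOLD signal in a target region (placeholder)

X = np.column_stack([np.ones(n_scans), task, seed_ts, ppi])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("PPI (interaction) beta:", beta[3])           # nonzero beta suggests task-modulated coupling
```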
... Most recently, Summers and colleagues reported activation in contralateral SI, bilateral parietal operculum or secondary somatosensory cortex (SII), and bilateral posterior parietal cortex (PPC) in response to unilateral vibrotactile motion on the fingertip (Summers et al. 2009). Similar parietal activation patterns have been reported by others to brush strokes (Polonara et al. 1999;Hagen et al. 2002;Yoo et al. 2003;Maihöfner et al. 2004), airflow (Bremmer et al. 2001), and moving patterns (Burton et al. 1997;Nakashita et al. 2008). Selective attention tasks have shown that SII and PPC process higher-level features of tactile stimuli; SII is preferentially activated by stimulus texture and duration, whereas PPC is preferentially activated by stimulus shape (O'Sullivan et al. 1994;Ledberg et al. 1995;Roland et al. 1998;Burton et al. 1999;Bodegård et al. 2001;Stilla and Sathian 2008). ...
... Several neuroimaging studies have reported MT+ activation in response to auditory motion (Berman and Colby 2002;Poirier et al. 2005;Poirier et al. 2006), while an equal number have either failed to observe MT+ activation (Smith et al. 2004;Smith et al. 2007) or reported negative activation (Lewis et al. 2000). The results of tactile motion studies are equally inconsistent, with some groups reporting MT+ activation (Hagen et al. 2002;Blake et al. 2004;Ricciardi et al. 2007;Summers et al. 2009) and others reporting no activation (Bremmer et al. 2001) or negative activation (Bodegård et al. 2000;Nakashita et al. 2008). If MT+ is responsive to tactile motion, then it may also mediate the tMAE. ...
... In addition, the SII (and absence of PPC) activation supports previous research indicating that SII is preferentially activated by tactile texture, or microgeometry, whereas the PPC is preferentially activated by shape, or macrogeometry (O'Sullivan et al. 1994;Ledberg et al. 1995;Roland et al. 1998;Burton et al. 1999;Bodegård et al. 2001;Stilla and Sathian 2008). PPC activation has been reported when attention to the macrogeometric features of a moving tactile stimulus is required (Kitada et al. 2003;Nakashita et al. 2008). For example, Kitada and colleagues showed that the PPC was activated when moving tactile stimuli presented to two fingers on the right hand required integration compared to when they did not (Kitada et al. 2003). ...
Article
Full-text available
The tactile motion aftereffect (tMAE) is a perceptual illusion in which a stationary stimulus feels as though it is moving when presented following adaptation to a unidirectionally moving tactile stimulus. Using functional magnetic resonance imaging (fMRI), we localized the brain areas responsive to tactile motion and then investigated whether these areas underlie the tMAE. Tactile stimulation was delivered to the glabrous surface of the right hand by means of a plastic cylinder with a square-wave patterned surface. In the tactile motion localizer, we contrasted periods in which the cylinder rotated at 15 rpm with periods of rest (stationary contact). Activation was observed in the contralateral (left) thalamus, postcentral gyrus, and parietal operculum. In the tMAE experiment, the cylinder rotated at 15 or 60 rpm for 2 min. The 60-rpm speed induced reliable tMAEs, whereas the 15-rpm speed did not. Of the areas activated by the tactile motion localizer, only the postcentral gyrus showed a sustained fMRI response following the offset of 60-rpm (but not 15-rpm) stimulation, presumably reflecting the illusory perception of motion.
... Numerous demonstrations of reliable cross-modal matching in experiments utilizing a large number of dimensional permutations suggest a common coin that allows for a precise and replicable matching of the psychological impacts of cross-modal stimulation. Recent studies employing neuroimaging (Nakashita et al., 2008) and electrophysiological (Giard and Peronnet, 1999; Wang et al., 2008 ) methods demonstrate neural interactions that may provide the underlying mechanism for such cross-modal neural interactions. Furthermore, two critical structures in the neurocircuitry of depression, namely the amygdala and the prefrontal cortex, are especially wired to coprocess inputs from several modalities (Nahm et al., 1993; Cooper et al., 1994; Gilbert et al., 1996; Day et al., 2005; Ghashghaei et al., 2007; Tettamanti et al., 2012). ...
... Furthermore, two critical structures in the neurocircuitry of depression, namely the amygdala and the prefrontal cortex, are especially wired to coprocess inputs from several modalities (Nahm et al., 1993; Cooper et al., 1994; Gilbert et al., 1996; Day et al., 2005; Ghashghaei et al., 2007; Tettamanti et al., 2012). Imaging (Nakashita et al., 2008) and electrophysiological (Giard and Peronnet, 1999; Wang et al., 2008) studies demonstrate neural interactions that may provide the underlying mechanism for cross-modal neural interactions. Related studies have shown that primary sensory cortical areas respond not just to the appropriate unimodal but also to multimodal stimuli. ...
Article
Full-text available
Depression involves a dysfunction in an affective fronto-limbic circuitry including the prefrontal cortices, several limbic structures including the cingulate cortex, the amygdala, and the hippocampus as well as the basal ganglia. A major emphasis of research on the etiology and treatment of mood disorders has been to assess the impact of centrally generated (top-down) processes impacting the affective fronto-limbic circuitry. The present review shows that peripheral (bottom-up) unipolar stimulation via the visual and the auditory modalities as well as by physical exercise modulates mood and depressive symptoms in humans and animals and activates the same central affective neurocircuitry involved in depression. It is proposed that the amygdala serves as a gateway by articulating the mood regulatory sensorimotor stimulation with the central affective circuitry by emotionally labeling and mediating the storage of such emotional events in long-term memory. Since both amelioration and aggravation of mood is shown to be possible by unipolar stimulation, the review suggests that a psychophysical assessment of mood modulation by multimodal stimulation may uncover mood ameliorative synergisms and serve as adjunctive treatment for depression. Thus, the integrative review not only emphasizes the relevance of investigating the optimal levels of mood regulatory sensorimotor stimulation, but also provides a conceptual springboard for related future research.
... Visuo-haptic convergence (VC > M ∩ HC > M) was found for control objects in all participants in the left LOC and SPL. The convergence of visual and haptic information in the SPL is consistent with reports suggesting the SPL or the posterior parietal cortex as brain regions where visual and tactile information converge (Makin et al., 2008; Nakashita et al., 2008). However, the MAX criterion was not met in the left SPL for any of the groups or object categories, i.e., bimodal stimulation did not lead to higher fMRI activations in this region compared to the unimodal conditions. ...
... However, the MAX criterion was not met in the left SPL for any of the groups or object categories, i.e., bimodal stimulation did not lead to higher fMRI activations in this region compared to the unimodal conditions. This looks at first surprising given the well-known role of the parietal cortex in multisensory integration (Nakashita et al., 2008). However, as our task was associated rather with shape than with motion perception, the finding that visuo-haptic integration engages predominantly the left LOC, a crossmodal object recognition region, but not the SPL, which is a part of the dorsal visual stream, looks appropriate. ...
Article
Human neuroplasticity of multisensory integration has been studied mainly in the context of natural or artificial training situations in healthy subjects. However, regular smokers also offer the opportunity to assess the impact of intensive daily multisensory interactions with smoking-related objects on the neural correlates of crossmodal object processing. The present functional magnetic resonance imaging study revealed that smokers show a comparable visuo-haptic integration pattern for both smoking paraphernalia and control objects in the left lateral occipital complex, a region playing a crucial role in crossmodal object recognition. Moreover, the degree of nicotine dependence correlated positively with the magnitude of visuo-haptic integration in the left lateral occipital complex (LOC) for smoking-associated but not for control objects. In contrast, in the left LOC non-smokers displayed a visuo-haptic integration pattern for control objects, but not for smoking paraphernalia. This suggests that prolonged smoking-related multisensory experiences in smokers facilitate the merging of visual and haptic inputs in the lateral occipital complex for the respective stimuli. Studying clinical populations who engage in compulsive activities may represent an ecologically valid approach to investigating the neuroplasticity of multisensory integration.
... (11,12) Regions such as the intraparietal sulcus and the superior frontal cortex also play a role in the integration of these different sensory signals. (13,14) For this reason, oral stimulation by food is perhaps the richest multimodal sensory experience we can have. (6) These combined sensations are integrated with another dimension of taste, its likability or palatability, allowing a complex evaluation of the food that facilitates the decision to accept or reject it. ...
Chapter
Full-text available
What benefits might liking the taste of a food have for our body? The sense of taste arises when nutrients in the mouth activate specific taste receptors, and it is perceived together with the food's texture, temperature, and odor. Taste has two main functions. First, it helps us decide what to eat by allowing foods to be evaluated for nutritional value and toxicity; second, it prepares the body to metabolize the foods that are eaten. Taste is one of the most vital senses for all living creatures in nature. Adequate and balanced nutrition is one of the primary requirements for sustaining life, and the first stage of nutrition is feeling the desire to consume a food and then consuming it. The sense of taste increases the desire to consume foods and helps us decide what to eat, while preventing the intake of potentially poisonous foods. (1) For early humans, poor food choices posed a risk both in terms of the energy wasted in searching for and gathering low-energy foods and in terms of the possible toxic effects of those foods on metabolism. Today, changes in social life and easy access to safe foods have changed people's view of the sense of taste. For modern people, who can easily obtain ready-made retail foods from supermarkets, taste has turned from a sense used to assess the safety and nutritional value of foods into a sense that helps us choose pleasurable foods. For people in developed societies, eating has ceased to be an act performed to survive and has become an act performed to get more pleasure out of life. In a world where people in underdeveloped societies eat to survive and people in developed societies eat for greater pleasure, these differing perspectives on taste have also influenced the scientific world, and studies on the subject have spread across different fields. Scientists interested in gastronomy and cooking study which foods should be consumed together and how they should be cooked; food engineers investigate the nutritional value of foods, their storage conditions, the preservation of their taste, and solutions to the food-supply problems of an ever-growing population; and in medicine, studies on the role of taste in preventing obesity and systemic diseases come to the fore.
... In addition, the SPL is also involved in memory processes, particularly in cross-modal conditions. First, the involvement of SPL in crossmodal integration has been extensively demonstrated (Molholm et al., 2006;Nakashita et al., 2008;Williams et al., 2015). Second, it shows stronger activation in the cross-modal working memory retrieval (Zhang et al., 2014) and cross-modal temporal order memory (Zhang et al., 2004) than in the unimodal conditions. ...
Article
Full-text available
Cross‐modal prediction serves a crucial adaptive role in the multisensory world, yet the neural mechanisms underlying this prediction are poorly understood. The present study addressed this important question by combining a novel audiovisual sequence memory task, functional magnetic resonance imaging (fMRI), and multivariate neural representational analyses. Our behavioral results revealed a reliable asymmetric cross‐modal predictive effect, with a stronger prediction from visual to auditory (VA) modality than auditory to visual (AV) modality. Mirroring the behavioral pattern, we found the superior parietal lobe (SPL) showed higher pattern similarity for VA than AV pairs, and the strength of the predictive coding in the SPL was positively correlated with the behavioral predictive effect in the VA condition. Representational connectivity analyses further revealed that the SPL mediated the neural pathway from the visual to the auditory cortex in the VA condition but was not involved in the auditory to visual cortex pathway in the AV condition. Direct neural pathways within the unimodal regions were found for the visual‐to‐visual and auditory‐to‐auditory predictions. Together, these results provide novel insights into the neural mechanisms underlying cross‐modal sequence prediction. Visual–auditory (VA) prediction was stronger than auditory–visual (AV) prediction. Superior parietal lobe (SPL) shows higher neural similarity for VA than AV prediction. Indirect pathway from the visual to auditory cortex via SPL underlies VA prediction. In contrast, direct connectivity within modality‐specific areas supports within‐modal predictions.
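For readers unfamiliar with the pattern-similarity measures referred to above, the following sketch computes a simple correlation-based similarity between two multivoxel activity patterns. The arrays are random placeholders and the approach is a generic illustration, not the authors' multivariate analysis pipeline.

```python
# Hedged sketch of a correlation-based pattern-similarity measure between two
# multivoxel activity patterns; array contents are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
pattern_cue    = rng.standard_normal(500)   # e.g., SPL pattern evoked by the first (cue) stimulus
pattern_target = rng.standard_normal(500)   # pattern evoked by the predicted (target) stimulus

similarity = np.corrcoef(pattern_cue, pattern_target)[0, 1]   # Pearson r across voxels
print(f"pattern similarity r = {similarity:.3f}")
```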
... More specifically, the superior parieto-occipital cortex (SPOC), located in the posterior part of the SPL just anterior to the parieto-occipital sulcus, is involved in arm actions implicated in reaching (Cavina-Pratesi et al. 2010;de Jong et al. 2001;Filimon et al. 2009;Quinlan and Culham 2007) and pointing (Connolly et al. 2003). Brodmann area 5 (BA5), on the other hand, situated at the most anterior part of the SPL, directly posterior to the primary somatosensory cortex, has unsurprisingly been associated with tactile discrimination (Nakashita et al. 2008;Stoeckel et al. 2004), the control of hand movements (Grafton et al. 1996;Kalaska et al. 1990), and fine motor control as well as movement imagery of finger actions (Hanakawa et al. 2003). Similar to BA5, the anterior part of the intraparietal sulcus (aIPS) is presumed to play a specialized role in grasping, using an object's visual characteristics to guide hand movements, although it is involved in reaching too (Culham et al. 2003;Konen et al. 2013;Orban 2016;Rice et al. 2006;Verhagen et al. 2012). ...
Article
Full-text available
Dual-site transcranial magnetic stimulation (ds-TMS) is well suited to investigate the causal effect of distant brain regions on the primary motor cortex, both at rest and during motor performance and learning. However, given the broad set of stimulation parameters, clarity about which parameters are most effective for identifying particular interactions is lacking. Here, evidence describing inter- and intra-hemispheric interactions during rest and in the context of motor tasks is reviewed. Our aims are threefold: (1) provide a detailed overview of the ds-TMS literature regarding inter- and intra-hemispheric connectivity; (2) describe the applicability and contributions of these interactions to motor control; and (3) discuss the practical implications and future directions. Of the 3659 studies screened, 109 were included and discussed. Overall, there is remarkable variability in the experimental context for assessing ds-TMS interactions, as well as in the use and reporting of stimulation parameters, hindering a quantitative comparison of results across studies. Further studies examining ds-TMS interactions in a systematic manner, and in which all critical parameters are carefully reported, are needed.
... Cross-modal information processing includes both cross-modal information integration and cross-modal conflict processing (Saito et al., 2003;Nakashita et al., 2008). Cross-modal conflict processing specifically concerns processes that efficiently inhibit a signal in order to resolve an interference between modalities (Wang et al., 2011). ...
Article
Full-text available
This experiment used event-related potentials (ERPs) to study tactile-visual information conflict processing in a tactile-visual pairing task and its modulation by tactile-induced emotional states. Eighteen participants were asked to indicate whether the tactile sensation on their body matched or did not match the expected tactile sensation associated with the object depicted in an image. The type of tactile-visual stimuli (matched vs. mismatched) and the valence of tactile-induced emotional states (positive vs. negative) were manipulated following a 2 × 2 factorial design. Electrophysiological analyses revealed a mismatched minus matched negative difference component between 420 and 620 ms after stimulus onset in the negative tactile-induced emotional state condition. This ND420-620 component was considered a sign of cross-modal conflict processing during the processing of incongruent tactile-visual information. In contrast, no significant mismatched minus matched negative difference component was found in the positive tactile-induced emotional state condition. Together, these results support the hypothesis that a positive emotional state induced by a positive tactile stimulation improves tactile-visual conflict processing abilities.
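The ND420-620 component described above is a difference wave evaluated in a fixed time window. The sketch below shows one generic way to compute such a mismatched-minus-matched difference and its mean amplitude between 420 and 620 ms, using random placeholder ERPs and an assumed 1000 Hz sampling rate; it is not the authors' analysis code.

```python
# Sketch of extracting a difference-wave component in a fixed time window;
# the ERP arrays here are random placeholders (grand averages, in µV) with an
# assumed 1000 Hz sampling rate and a -200 to 800 ms epoch.
import numpy as np

fs = 1000                                   # samples per second (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)            # epoch time axis in seconds
rng = np.random.default_rng(3)
erp_matched    = rng.standard_normal(t.size)    # matched trials (placeholder)
erp_mismatched = rng.standard_normal(t.size)    # mismatched trials (placeholder)

diff_wave = erp_mismatched - erp_matched
window = (t >= 0.420) & (t <= 0.620)
mean_amp = diff_wave[window].mean()         # mean amplitude of the difference component
print(f"mean difference amplitude 420-620 ms: {mean_amp:.2f} µV")
```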
... Thus, activation in these regions may be partly associated with such heuristics. This speculation is consistent with the perspective that the anterior insula and inferior frontal gyrus are associated with covert articulation 40, as well as with previous findings that this region is also sensitive to matching between pictures and words that do not contain sound symbolism [41][42][43]. The congruency between the impression of softness from covertly produced sounds and tactile information may become more salient, leading to activation in the insula and the medial areas of the superior frontal gyrus. ...
Article
Full-text available
Contrary to the assumption of modern linguistics, there is a non-arbitrary association between sound and meaning in sound-symbolic words. Neuroimaging studies have suggested a unique contribution of the superior temporal sulcus to the processing of sound symbolism. However, because these findings are limited to the mapping between sound symbolism and visually presented objects, the processing of sound-symbolic information may also involve sensory-modality-dependent mechanisms. Here, we conducted a functional magnetic resonance imaging experiment to test whether the brain regions engaged in the tactile processing of object properties are also involved in mapping sound-symbolic information onto tactually perceived object properties. Thirty-two healthy subjects performed a matching task in which they judged the congruency between softness perceived by touch and softness associated with sound-symbolic words. A congruency effect was observed in the orbitofrontal cortex, inferior frontal gyrus, insula, medial superior frontal gyrus, cingulate gyrus, and cerebellum. This effect in the insula and medial superior frontal gyrus overlapped with softness-related activity that was separately measured in the same subjects in the tactile experiment. These results indicate that the insula and medial superior frontal gyrus play a role in processing sound-symbolic information and relating it to tactile softness information.
... A study on nonhuman primates also found neurons that exhibit signs of multisensory interactions with super-additive or sub-additive response summation located in the superior temporal region and spatially clustered [67]. In addition, several studies using functional magnetic resonance imaging (fMRI) in humans and nonhumans identified regions in the premotor cortex [31,68] or posterior parietal cortex [69] with clusters of neurons also exhibiting polysensory responses, which might be involved in the integration of multimodal information about action. Furthermore, in animal studies, parts of the midbrain, i.e., deeper laminae of the superior colliculus, have been found to contain multisensory neurons of all possible combinations [48,70]. ...
Article
Full-text available
Optimal motor control requires the effective integration of multi-modal information. Visual information of movement performed by others even enhances potentials in the upper motor neurons through the mirror-neuron system. On the other hand, it is known that motor control is intimately associated with afferent proprioceptive information. Kinaesthetic information is also generated by passive, externally driven movements. In the context of sensory integration, it is an important question how such passive kinaesthetic information and visually perceived movements are integrated. We studied the effects of visual and kinaesthetic information in combination, as well as in isolation, on sensorimotor integration, compared to a control condition. For this, we measured the change in the excitability of the motor cortex (M1) using low-intensity transcranial magnetic stimulation (TMS). We hypothesised that both visual motoneurons and kinaesthetic motoneurons enhance the excitability of motor responses. We found that passive wrist movements increase motor excitability, suggesting that kinaesthetic motoneurons do exist. The kinaesthetic influence on the motor threshold was even stronger than that of the visual information. Moreover, simultaneous visual and passive kinaesthetic information increased cortical excitability more than each of them independently. Thus, for the first time, we found evidence for the integration of passive kinaesthetic and visual sensory stimuli.
... The second phenomenon involves the task-related deactivation of associated areas that belong to an irrelevant sensory modality. For example, deactivation of the visual cortex occurs during somatosensory (tactile) discrimination tasks [9][10][11][12]. The third involves the "blood steal" phenomenon. ...
Article
Full-text available
The present study employed functional magnetic resonance imaging (fMRI) to examine the characteristics of negative blood oxygen level-dependent (Negative BOLD) signals during motor execution. Subjects repeated extension and flexion of one of the following: the right hand, left hand, right ankle, or left ankle. Negative BOLD responses during hand movements were observed in the ipsilateral hemisphere of the hand primary sensorimotor area (SMI), medial frontal gyrus (MeFG), middle frontal gyrus (MFG), and superior frontal gyrus (SFG). Negative BOLD responses during foot movements were also noted in the bilateral hand SMI, MeFG, MFG, SFG, inferior frontal gyrus, middle temporal gyrus, parahippocampal gyrus, anterior cingulate cortex, cingulate gyrus (CG), fusiform gyrus, and precuneus. A conjunction analysis showed that portions of the MeFG and CG involving similar regions to those of the default mode network were commonly deactivated during voluntary movements of the right/left hand or foot. The present results suggest that three mechanisms are involved in the Negative BOLD responses observed during voluntary movements: (1) transcallosal inhibition from the contralateral to ipsilateral hemisphere in the SMI, (2) the deactivated neural network with several brain regions, and (3) the default mode network in the MeFG and CG.
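A conjunction analysis of the kind mentioned above is often implemented with a minimum-statistic approach: a voxel is counted only if it passes threshold in every condition map. The sketch below illustrates that logic with random stand-in t-maps and an assumed threshold; it is not the specific procedure used in this study.

```python
# Illustrative minimum-statistic conjunction across condition maps: a voxel
# is retained only if it exceeds threshold in every map (for deactivations,
# the sign of the t-values would simply be flipped). Maps are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)
t_maps = {cond: rng.standard_normal(10000)           # flattened voxel-wise t-values
          for cond in ["right hand", "left hand", "right ankle", "left ankle"]}

conj_map = np.minimum.reduce(list(t_maps.values()))  # minimum t across conditions
threshold = 3.1                                       # assumed voxel-wise cutoff
common = conj_map > threshold                         # True where all conditions exceed it
print("voxels in conjunction:", int(common.sum()))
```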
... These data are consistent with results shown by other authors who utilized the same task in S1 5,6,40 and V1 45. We also found extensive neuronal firing rate modulations in the ACC and PPC, which demonstrate the clear involvement of these cortical areas in tactile discrimination, in agreement with electrophysiological 35,46,47 and functional imaging studies 48,49. The presence of marked modulations in the neuronal firing rate in all task phases (anticipation, discrimination, response and reward) also suggests that even such a simple tactile discrimination task requires the involvement of vast cortical circuits to be performed properly. ...
Article
Full-text available
Processing of tactile sensory information in rodents is critically dependent on the communication between the primary somatosensory cortex (S1) and higher-order integrative cortical areas. Here, we simultaneously characterized single-unit activity and local field potential (LFP) dynamics in S1, the primary visual cortex (V1), the anterior cingulate cortex (ACC), and the posterior parietal cortex (PPC) while freely moving rats performed an active tactile discrimination task. Simultaneous single-unit recordings from all these cortical regions revealed statistically significant neuronal firing rate modulations during all task phases (anticipatory, discrimination, response, and reward). Meanwhile, phase analysis of pairwise LFP recordings revealed long-range synchronization across the sampled fronto-parieto-occipital cortical areas during tactile sampling. Causal analysis of the same pairwise recorded LFPs demonstrated complex dynamic interactions between cortical areas throughout the fronto-parieto-occipital loop. These interactions changed significantly between cortical regions as a function of frequency (i.e., beta, theta and gamma) and according to the different phases of the behavioral task. Overall, these findings indicate that active tactile discrimination by rats is characterized by much more widespread and dynamic complex interactions within the fronto-parieto-occipital cortex than previously anticipated.
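Pairwise phase analysis of LFPs, as described above, is commonly quantified with a phase-locking value (PLV). The sketch below computes a PLV between two synthetic band-limited signals via the Hilbert transform; real analyses would first band-pass filter the LFPs into beta, theta, or gamma bands, and the signals and parameters here are illustrative rather than taken from the study.

```python
# Illustrative phase-locking value (PLV) between two LFP-like signals; a
# generic sketch of pairwise phase synchronization, not the authors' pipeline.
import numpy as np
from scipy.signal import hilbert

fs, dur = 1000, 2.0                         # sampling rate (Hz), duration (s), assumed
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)   # theta-band-like signal
y = np.sin(2 * np.pi * 8 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))   # 1 = perfect locking, 0 = none
print(f"PLV = {plv:.2f}")
```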
Article
Sexual behavior plays a fundamental role for reproduction in mammals and other animal species. It is characterized by an anticipatory and a consummatory phase, and several copulatory parameters have been identified in each phase, mainly in rats. Sexual behavior varies significantly across rats even when they are of the same strain and reared under identical conditions. This review shows that rats of the same strain selectively bred for showing a divergent behavioral trait when exposed to stress or novelty (i.e. Roman high and low avoidance rats, bred for their different avoidance response to the shuttle box, and high and low novelty exploration responders rats, bred for their different exploratory response to a novel environment) or a spontaneous behavior with divergent frequency (i.e. low and high yawning frequency rats, bred for their divergent yawning frequency) show similar differences in sexual behavior, mainly in copulatory pattern, but also in sexual motivation. As shown by behavioral pharmacology and intracerebral microdialysis experiments carried out mainly in Roman rats, these sexual differences may be due to a more robust dopaminergic tone present in the mesocorticolimbic dopaminergic system of one of the two sub-lines (e.g. high avoidance, high novelty exploration, and low yawning rat sub-lines). Thus, differences in genotype and/or in prenatal/postnatal environment lead not only to individual differences in temperament and environmental/emotional reactivity but also in sexual behavior. Because of the highly conserved mechanisms controlling reproduction in mammals, this may occur not only in rats but also in humans.
... Second, and more importantly, both the mirror and the sensors provided visual feedback, whereas palpation and a tape provide tactile feedback. Because visual motion detection is processed differently from tactile motion detection [32], we chose to compare the sensor feedback with feedback from a mirror. Healthy subjects and patients with CLBP were equally capable of improving lumbopelvic movement control. ...
Article
Full-text available
Background: Improving movement control can be an important treatment goal for patients with chronic low back pain (CLBP). Although external feedback is essential when learning new movement skills, many aspects of feedback provision in patients with CLBP remain unexplored. New rehabilitation technologies, such as movement sensors, are able to provide reliable and accurate feedback; as such, they might be more effective than conventional feedback for improving movement control. The aims of this study were (1) to assess whether sensor-based feedback is more effective for improving lumbopelvic movement control than feedback from a mirror or no feedback in patients with CLBP, and (2) to evaluate whether patients with CLBP are equally capable of improving lumbopelvic movement control compared with healthy persons.

Methods: Fifty-four healthy participants and 54 patients with chronic non-specific LBP were recruited. Both participant groups were randomised into three subgroups. During a single exercise session, the subgroups practised a lumbopelvic movement control task while receiving a different type of feedback: feedback from movement sensors, feedback from a mirror, or no feedback (control group). Kinematic measurements of the lumbar spine and hip were obtained at baseline, during, and immediately after the intervention to evaluate improvements in movement control on the practised task (assessment of performance) and on a transfer task (assessment of motor learning).

Results: Sensor-based feedback was more effective than feedback from a mirror (p
... However, studies during the last few decades provide evidence that the nature of our sensory experience is mostly multisensory. Numerous studies have demonstrated that visuo-tactile, audio-tactile, and audiovisual integration takes place in parallel at numerous levels along brain pathways and also produces behavioral responses (Foxe et al., 2002;Hötting et al., 2003;Taylor-Clarke et al., 2004;Budinger et al., 2006;Gillmeister and Eimer, 2007;Nakashita et al., 2008;Mahoney et al., 2014;Staines et al., 2014;Kwon et al., 2017). ...
Article
Full-text available
Based on a brief overview of the various aspects of schizophrenia reported by numerous studies, here we hypothesize that schizophrenia may originate (and in part be performed) from visual areas. In other words, it seems that a normal visual system or at least an evanescent visual perception may be an essential prerequisite for the development of schizophrenia as well as of various types of hallucinations. Our study focuses on auditory and visual hallucinations, as they are the most prominent features of schizophrenic hallucinations (and also the most studied types of hallucinations). Here, we evaluate the possible key role of the visual system in the development of schizophrenia.
... With regard to the brain areas identified by microstate segmentation method, predominant neural activity was observed in the MI. Previous studies have reported that SI, SMA, IPL, secondary sensory cortex (SII), PMA, and others, contralateral to the limb used, showed neural activity when identifying the direction of physical movement based on passive sensorimotor information (Nakashita et al., 2008). Among them, the MI, which showed particularly high neural activity, is an important brain area involved in somatosensory perception, especially in processing kinesthetic information from the muscle spindles, and motion perception (Naito, 2004). ...
Article
Full-text available
Background: The association between motor imagery ability and the brain neural activity that leads to the manifestation of a motor illusion remains unclear.

Objective: In this study, we examined the association between the ability to generate motor imagery and the brain neural activity leading to the induction of a motor illusion by vibratory stimulation.

Methods: The sample consisted of 20 healthy individuals without movement or sensory disorders. We measured the time between the starting and ending points of a motor illusion (the time to illusion induction, TII) and performed electroencephalography (EEG). We conducted a temporo-spatial analysis of the brain activity leading to the induction of motor illusions using the EEG microstate segmentation method. Additionally, we assessed the ability to generate motor imagery using the Japanese version of the Movement Imagery Questionnaire-Revised (JMIQ-R) before the task and examined the associations among the neural activity levels identified by microstate segmentation, TII, and JMIQ-R scores.

Results: Four typical microstates were identified during TII, with significantly higher neural activity in the ventrolateral prefrontal cortex, primary sensorimotor area, supplementary motor area (SMA), and inferior parietal lobule (IPL). Moreover, there were significant negative correlations between TII and the neural activity of the primary motor cortex (MI), SMA, and IPL, and a significant positive correlation between the neural activity of the SMA and the JMIQ-R scores.

Conclusion: These findings suggest that a neural network comprising mainly the SMA and MI, which are involved in generating motor imagery, may form the neural basis for inducing motor illusions. This may aid in creating a new approach to neurorehabilitation that enables a more robust reorganization of the neural base for patients with brain dysfunction and motor function disorders.
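EEG microstate segmentation, as used above, typically starts from global field power (GFP) and clusters the scalp topographies at GFP peaks. The sketch below computes GFP and locates its peaks on a random placeholder data matrix, leaving the clustering step aside; all values and dimensions are assumptions for illustration, not the study's data or pipeline.

```python
# Sketch of global field power (GFP), whose peaks are typically used as input
# to EEG microstate segmentation; the data matrix is a random placeholder
# (channels x time points).
import numpy as np

rng = np.random.default_rng(5)
eeg = rng.standard_normal((64, 2000))        # 64 channels, 2000 samples (assumed)
eeg -= eeg.mean(axis=0, keepdims=True)       # average-reference the data

gfp = eeg.std(axis=0)                        # spatial standard deviation at each time point
peaks = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1
print("number of GFP peaks:", peaks.size)    # topographies at these peaks would be clustered into microstates
```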
... Among these multisensory integrations, visuo-tactile multisensory integration is known to be an important technique for use in the field of behavioral neuroscience and rehabilitation (Banati et al., 2000;Haggard et al., 2007;Kim and James, 2010;Gentile et al., 2011;Mahoney et al., 2014). Therefore, many studies have investigated the evidence for tactile and visual interactive responses to activation of various brain regions (Banati et al., 2000;Nakashita et al., 2008;Gentile et al., 2011;Martinez-Jauand et al., 2012;Schaefer et al., 2012). However, few studies have reported on the effects of visuo-tactile multisensory integration on the amount of brain activation on the somatosensory cortical regions (Kim and James, 2010;Gentile et al., 2011). ...
Article
Full-text available
Many studies have investigated the evidence for tactile and visual interactive responses in the activation of various brain regions. However, few studies have reported on the effects of visuo-tactile multisensory integration on the amount of brain activation in somatosensory cortical regions. The aim of this study was to examine whether coincident visual information provided with tactile stimulation can affect somatosensory cortical activation, using functional MRI. Ten right-handed healthy subjects were recruited for this study. Two tasks (tactile stimulation and visuo-tactile stimulation) were performed using a block paradigm during fMRI scanning. In the tactile stimulation task, with the subjects' eyes closed, tactile stimulation was applied to the dorsum of the right hand in the proximal-to-distal direction using a rubber brush. In the visuo-tactile stimulation task, tactile stimulation was applied while the subjects observed, in a mirror attached inside the MRI chamber, their hands being touched with the brush. In the SPM group analysis, we found brain activation in somatosensory cortical areas. The tactile stimulation task induced brain activations in the left primary sensorimotor cortex (SM1) and secondary somatosensory cortex (S2). In the visuo-tactile stimulation task, brain activations were observed in bilateral SM1, bilateral S2, and the right posterior parietal cortex. In all tasks, peak activation was detected in the contralateral SM1. We examined the effects of visuo-tactile multisensory integration on SM1 and found that visual information during tactile stimulation could enhance SM1 activation compared with tactile unisensory stimulation.
... Of all the links, the bilateral parieto-occipital connectivity strength increased the most when compared to the None condition. The parietal cortices are involved with early visual signal processing [43] and visual attention feedback [44]. The letters of the N-Back task are encoded as both visuospatial (right hemisphere) and verbal (left hemisphere) objects. ...
Article
Objective: Synchronization in activated regions of cortical networks affects the brain's frequency response, which has been associated with a wide range of states and abilities, including memory. A non-invasive method for manipulating cortical synchronization is binaural beats. Binaural beats take advantage of the brain's response to two pure tones, delivered independently to each ear, when those tones have a small frequency mismatch. The mismatch between the tones is interpreted as a beat frequency, which may act to synchronize cortical oscillations. Neural synchrony is particularly important for working memory processes, the system controlling online organization and retention of information for successful goal-directed behavior. Therefore, manipulation of synchrony via binaural beats provides a unique window into working memory and the associated connectivity of cortical networks.

Approach: In this study, we examined the effects of different acoustic stimulation conditions during an N-back working memory task, and we measured participant response accuracy and cortical network topology via EEG recordings. Six acoustic stimulation conditions were used: None, Pure Tone, Classical Music, 5Hz binaural beats, 10Hz binaural beats, and 15Hz binaural beats.

Main results: We determined that listening to 15Hz binaural beats during an N-back working memory task increased individual participants' accuracy, modulated the cortical frequency response, and changed the cortical network connection strengths during the task. Only the 15Hz binaural beats produced a significant change in relative accuracy compared to the None condition.

Significance: Listening to 15Hz binaural beats during the N-back task activated salient frequency bands and produced networks characterized by higher information transfer as compared to other auditory stimulation conditions.
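The beat frequency referred to above is simply the absolute difference between the two carrier frequencies delivered to the two ears. The sketch below constructs such a stereo stimulus for an assumed 400 Hz carrier and a 15 Hz offset; the parameter values are illustrative, not those used in the study.

```python
# Sketch of how a binaural-beat stimulus is typically constructed: two pure
# tones with a small frequency offset, one per ear; parameter values are
# illustrative (a 400 Hz carrier with a 15 Hz offset gives a 15 Hz beat).
import numpy as np

fs, dur = 44100, 5.0
t = np.arange(0, dur, 1 / fs)
f_left, f_right = 400.0, 415.0              # |415 - 400| = 15 Hz perceived beat
left  = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * f_right * t)
stereo = np.stack([left, right], axis=1)    # deliver left/right channels to separate ears
print("beat frequency:", abs(f_right - f_left), "Hz; samples:", stereo.shape)
```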
... However, our data remain consistent with the null hypothesis that PPC is not involved in processing the direction of tactile motion. Interestingly, several neuroimaging studies reported IPS/IPL activations in response to tactile motion, suggesting that our PPC stimulation should have been effective 8,[11][12][13][14][15][16][17] . Again, we believe that careful consideration of the stimulation parameters for tactile motion may resolve the apparent controversy between those imaging results and ours. ...
Article
Full-text available
Human imaging studies have reported activations associated with tactile motion perception in visual motion area V5/hMT+, primary somatosensory cortex (SI) and posterior parietal cortex (PPC; Brodmann areas 7/40). However, such studies cannot establish whether these areas are causally involved in tactile motion perception. We delivered double-pulse transcranial magnetic stimulation (TMS) while moving a single tactile point across the fingertip, and used signal detection theory to quantify perceptual sensitivity to motion direction. TMS over both SI and V5/hMT+, but not the PPC site, significantly reduced tactile direction discrimination. Our results show that V5/hMT+ plays a causal role in tactile direction processing, and strengthen the case for V5/hMT+ serving multimodal motion perception. Further, our findings are consistent with a serial model of cortical tactile processing, in which higher-order perceptual processing depends upon information received from SI. By contrast, our results do not provide clear evidence that the PPC site we targeted (Brodmann areas 7/40) contributes to tactile direction perception.
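The signal-detection analysis mentioned above separates sensitivity from response bias by expressing performance as d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch with invented trial counts (not data from the TMS experiment) is given below.

```python
# Minimal sketch of perceptual sensitivity (d') from signal detection theory.
# The trial counts below are invented for illustration; they are not data from the TMS study.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction to avoid rates of 0 or 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)          # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=38, misses=12, false_alarms=15, correct_rejections=35))
```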
... The method of functional magnetic resonance imaging builds on knowledge of which structures are activated by a moving tactile stimulus: the thalamus, the postcentral gyrus and the second somatosensory area [44]. Neuroimaging studies have revealed the involvement of these human brain structures in the processing of tactile motion [45][46][47][48][49]. When recording brain activity, the palm of the right hand was stimulated by a rotating drum (static in the control condition) whose surface was composed of discrete fragments [50]. ...
Article
Full-text available
The motion aftereffect may be considered as a consequence of visual illusions of self-motion (vection) and the persistence of sensory information processing. There is ample experimental evidence indicating a uniformity of mechanisms that underlie motion aftereffects in different modalities based on the principle of motion detectors. Currently, there is firm ground to believe that the motion aftereffect is intrinsic to all sensory systems involved in spatial orientation, that motion adaptation in one sensory system elicits changes in another one, and that such adaptation is of great adaptive importance for spatial orientation and motion of an organism. This review seeks to substantiate these ideas.
... However, studies during the last few decades provide evidence that the nature of our sensory experience is mostly multisensory. Numerous studies have demonstrated that visuo-tactile, audio-tactile, and audiovisual integration takes place in parallel at numerous levels along brain pathways and also shapes behavioral responses (Foxe et al., 2002; Hötting et al., 2003; Taylor-Clarke et al., 2004; Budinger et al., 2006; Gillmeister and Eimer, 2007; Nakashita et al., 2008; Mahoney et al., 2014; Staines et al., 2014; Kwon et al., 2017). ...
Article
Full-text available
Today, there is an increased interest in research on lysergic acid diethylamide (LSD) because it may offer new opportunities in psychotherapy under controlled settings. The more we know about how a drug works in the brain, the more opportunities there will be to exploit it in medicine. Here, based on our previously published papers and investigations, we suggest that LSD-induced visual hallucinations/phosphenes may be due to the transient enhancement of bioluminescent photons in the early retinotopic visual system in blind as well as healthy people.
... Human imaging studies suggest that systems for multisensory integration in near-personal space also exist in the human brain. Functional magnetic resonance imaging (fMRI) studies have identified areas in the premotor cortex and intraparietal cortex that respond to both visual and tactile stimulation in relation to specific body parts (Bremmer et al., 2001; Ehrsson et al., 2004; Lloyd, Morrison, & Roberts, 2006; Lloyd, Shore, Spence, & Calvert, 2003; Nakashita et al., 2008; Sereno & Huang, 2006). Lloyd et al. (2003) identified areas in the ventral premotor cortex and intraparietal cortex that were active when a real hand was touched in sight of the observer and showed that these activations were modulated by the position of the arm. ...
... Ptito et al. [2] reported that tactile stimulation of the tongue evoked posterior parietal cortex and extrastriate cortex activities, while James et al. [3] and Sathian et al. [56] found extrastriate cortex activation during shape discrimination trials with the fingers. Nakashita et al. [57] also reported tactile-visual integration in the posterior parietal cortex, and Pasalar et al. [58] noted that application of TMS to the posterior parietal cortex disrupted shape discrimination. Furthermore, Hannula et al. [59] and Zangaladze et al. [60] applied TMS to the primary somatosensory cortex (S1) and confirmed disruption of tactile temporal discrimination. ...
Article
Full-text available
A cross-modal association between somatosensory tactile sensation and parietal and occipital activities during Braille reading was initially discovered in tests with blind subjects, with sighted and blindfolded healthy subjects used as controls. However, the neural background of oral stereognosis remains unclear. In the present study, we investigated whether the parietal and occipital cortices are activated during shape discrimination by the mouth using functional near-infrared spectroscopy (fNIRS). Following presentation of the test piece shape, a sham discrimination trial without the test pieces induced posterior parietal lobe (BA7), extrastriate cortex (BA18, BA19), and striate cortex (BA17) activation as compared with the rest session, while shape discrimination of the test pieces markedly activated those areas as compared with the rest session. Furthermore, shape discrimination of the test pieces specifically activated the posterior parietal cortex (precuneus/BA7), extrastriate cortex (BA18, 19), and striate cortex (BA17), as compared with sham sessions without a test piece. We concluded that oral tactile sensation is recognized through tactile/visual cross-modal substrates in the parietal and occipital cortices during shape discrimination by the mouth.
... By employing common spatial frames of reference, spatial information between the two sensory modalities can be directly compared. Previous neuroimaging studies have identified multiple cortical regions involved in visuo-tactile interaction of macrogeometric properties such as the intraparietal sulcus (IPS) (Grefkes et al., 2002;Kitada et al., 2006;Nakashita et al., 2008;Saito et al., 2003;Tal and Amedi, 2009), claustrum/insula (Hadjikhani and Roland, 1998;Kassuba et al., 2013) and lateral occipital complex (LOC) (Amedi et al., 2001;James et al., 2002;Kassuba et al., 2013;Kim and James, 2010;Zhang et al., 2004). ...
Article
Visual clues as to the physical substance of manufactured objects can be misleading. For example, a plastic ring can appear to be made of gold. However, we can avoid misidentifying an object’s substance by comparing visual and tactile information. As compared to the spatial properties of an object (e.g., orientation), however, little information regarding physical object properties (material properties) is shared between vision and touch. How can such different kinds of information be compared in the brain? One possibility is that the visuo-tactile comparison of material information is mediated by associations that are previously learned between the two modalities. Previous studies suggest that a cortical network involving the medial temporal lobe and precuneus plays a critical role in the retrieval of information from long-term memory. Here, we used functional magnetic resonance imaging (fMRI) to test whether these brain regions are involved in the visuo-tactile comparison of material properties. The stimuli consisted of surfaces in which an oriented plastic bar was placed on a background texture. Twenty-two healthy participants determined whether the orientations of visually- and tactually-presented bar stimuli were congruent in the orientation conditions, and whether visually- and tactually-presented background textures were congruent in the texture conditions. The texture conditions revealed greater activation of the fusiform gyrus, medial temporal lobe and lateral prefrontal cortex compared with the orientation conditions. In the texture conditions, the precuneus showed greater response to incongruent stimuli than to congruent stimuli. This incongruency effect was greater for the texture conditions than for the orientation conditions. These results suggest that the precuneus is involved in detecting incongruency between tactile and visual texture information in concert with the medial temporal lobe, which is tightly linked with long-term memory.
... The PPC is also known to play a crucial role in the integration of stimuli from different modalities [24]. When subjects were instructed to perform a motion discrimination task under the simultaneous presentation of visual and tactile stimuli, the left SPL was more prominently activated under congruent event conditions than under incongruent conditions [25], indicating that the SPL is involved in cross-modal integration among different sensory modalities. Using intracranial recording [26] and EEG/ERP recording [27] in humans, the SPL has been shown to respond more strongly to multisensory stimuli than the sum of the responses to each unisensory stimulus. ...
Article
Full-text available
Cross-modal working memory requires integrating stimuli from different modalities and is associated with co-activation of distributed networks in the brain. However, how the brain initiates cross-modal working memory retrieval remains unclear. In the present study, we developed a cued matching task in which the necessity for cross-modal/unimodal memory retrieval and its initiation time were controlled by a task cue that appeared in the delay period. Using functional magnetic resonance imaging (fMRI), significantly larger brain activations were observed in the left lateral prefrontal cortex (l-LPFC), left superior parietal lobe (l-SPL), and thalamus in the cued cross-modal matching trials (CCMT) compared with those in the cued unimodal matching trials (CUMT). However, no significant differences in brain activations prior to the task cue were observed for sensory stimulation in the l-LPFC and l-SPL areas. Although the thalamus displayed differential responses to the sensory stimulation between the two conditions, these differential responses were not the same as the responses to the task cues. These results revealed that a frontoparietal-thalamus network participates in the initiation of cross-modal working memory retrieval. Secondly, the l-SPL and thalamus showed differential activations between maintenance and working memory retrieval, which might be associated with an enhanced demand for cognitive resources.
... As a result, our model space consisted of 12 alternative models, each of which was fitted to the data from each individual subject within both hemispheres, with additional interhemispheric connections between all regions. Similar to a recent DCM study of WM (Ma et al. 2011), the visual input (driving) entered the SPL bilaterally (Baizer et al. 1991; Nakashita et al. 2008). Starting from this basic layout, a factorially structured model space was derived by considering where the modulatory effect of the 2-back WM condition might be expressed within both hemispheres (for a graphical summary of the model design see Fig. 1b). ...
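For context, the "driving" and "modulatory" effects referred to in this model space correspond to the C and B matrices of the standard bilinear state equation used in dynamic causal modelling (the generic form of the model, not an equation quoted from this study):

```latex
\dot{x} \;=\; \Bigl(A + \sum_{j} u_j\, B^{(j)}\Bigr)\,x \;+\; C\,u
```

Here x contains the neural states of the modelled regions (e.g., SPL and frontal areas), A the fixed endogenous connections, B^{(j)} the change in those connections induced by experimental condition j (here the 2-back working memory condition), and C the direct driving inputs (here the visual input entering the SPL bilaterally).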
Article
Full-text available
It has been proposed that green tea extract may have a beneficial impact on cognitive functioning, suggesting promising clinical implications. However, the neural mechanisms underlying this putative cognitive enhancing effect of green tea extract still remain unknown. This study investigates whether the intake of green tea extract modulates effective brain connectivity during working memory processing and whether connectivity parameters are related to task performance. Using a double-blind, counterbalanced, within-subject design, 12 healthy volunteers received a milk whey-based soft drink containing 27.5 g of green tea extract or a milk whey-based soft drink without green tea as control substance while undergoing functional magnetic resonance imaging. Working memory effect on effective connectivity between frontal and parietal brain regions was evaluated using dynamic causal modeling. Green tea extract increased the working memory induced modulation of connectivity from the right superior parietal lobule to the middle frontal gyrus. Notably, the magnitude of green tea induced increase in parieto-frontal connectivity positively correlated with improvement in task performance. Our findings provide first evidence for the putative beneficial effect of green tea on cognitive functioning, in particular, on working memory processing at the neural system level by suggesting changes in short-term plasticity of parieto-frontal brain connections. Modeling effective connectivity among frontal and parietal brain regions during working memory processing might help to assess the efficacy of green tea for the treatment of cognitive impairments in psychiatric disorders such as dementia.
... As in a recent DCM study of working memory [38], the visual input (driving) entered the SPL bilaterally [39,40]. Starting from this basic layout, a factorially structured model space was derived by considering where the modulatory effect of the 2-back working memory condition might be expressed (Fig. 2A). We contrasted models in which the 2-back working memory condition was allowed to modulate, within both hemispheres, the parietofrontal connections, the frontoparietal connections or both (first, second and third row of Fig. 2A, respectively). ...
Article
Full-text available
Recent evidence has revealed abnormal functional connectivity between the frontal and parietal brain regions during working memory processing in patients with schizophrenia and first-episode psychosis. However, it still remains unclear whether abnormal frontoparietal connectivity during working memory processing is already evident in the psychosis high-risk state and whether the connection strengths are related to psychopathological outcomes. Healthy controls and antipsychotic-naive individuals with an at-risk mental state (ARMS) performed an n-back working memory task while undergoing functional magnetic resonance imaging. Effective connectivity between frontal and parietal brain regions during working memory processing was characterized using dynamic causal modelling. Our study included 19 controls and 27 individuals with an ARMS. Individuals with an ARMS showed significantly lower task performance and reduced activity in the right superior parietal lobule and middle frontal gyrus compared with controls. Furthermore, the working memory-induced modulation of the connectivity from the right middle frontal gyrus to the right superior parietal lobule was significantly reduced in individuals with an ARMS, and the strength of this connectivity was negatively related to the Brief Psychiatric Rating Scale total score. The modest sample size precludes a meaningful subgroup analysis for participants with a later transition to psychosis. This study demonstrates that abnormal frontoparietal connectivity during working memory processing is already evident in individuals with an ARMS and is related to psychiatric symptoms. Thus, our results provide further insight into the pathophysiological mechanisms of the psychosis high-risk state by linking functional brain imaging, computational modelling and psychopathology.
... This ROI was previously used by Bekrater-Bodmann et al. [23]. For the IPC, we also used a special ROI likewise created on the basis of previous findings regarding body-related multisensory stimulation [3], [25], [27], [28], [29], [30] (volume of the right-hemispheric IPC ROI: 618 voxels; volume of the left-hemispheric IPC ROI: 574 voxels). Due to the strong a priori hypotheses about ...
Article
Full-text available
In the so-called rubber hand illusion, synchronous visuotactile stimulation of a visible rubber hand together with one's own hidden hand elicits ownership experiences for the artificial limb. Recently, advanced virtual reality setups were developed to induce a virtual hand illusion (VHI). Here, we present functional imaging data from a sample of 25 healthy participants using a new device to induce the VHI in the environment of a magnetic resonance imaging (MRI) system. In order to evaluate the neuronal robustness of the illusion, we varied the degree of synchrony between visual and tactile events in five steps: in two conditions, the tactile stimulation was applied prior to visual stimulation (asynchrony of -300 ms or -600 ms), whereas in another two conditions, the tactile stimulation was applied after visual stimulation (asynchrony of +300 ms or +600 ms). In the fifth condition, tactile and visual stimulation was applied synchronously. On a subjective level, the VHI was successfully induced by synchronous visuotactile stimulation. Asynchronies between visual and tactile input of ±300 ms did not significantly diminish the vividness of illusion, whereas asynchronies of ±600 ms did. The temporal order of visual and tactile stimulation had no effect on VHI vividness. Conjunction analyses of functional MRI data across all conditions revealed significant activation in bilateral ventral premotor cortex (PMv). Further characteristic activation patterns included bilateral activity in the motion-sensitive medial superior temporal area as well as in the bilateral Rolandic operculum, suggesting their involvement in the processing of bodily awareness through the integration of visual and tactile events. A comparison of the VHI-inducing conditions with asynchronous control conditions of ±600 ms yielded significant PMv activity only contralateral to the stimulation site. These results underline the temporal limits of the induction of limb ownership related to multisensory body-related input.
... The principal brain regions devoted to these multimodal flavor integrations are insular and orbitofrontal cortex and also amygdala and entorhinal cortex [18,20]. More traditional multisensory regions such as the intraparietal sulcus and superior frontal cortex are also involved with integrating these different sensory signals [38,39]. These combined sensations enable a complex evaluation of the food to facilitate decisions to ingest or reject the food. ...
Article
Full-text available
The sense of taste is stimulated when nutrients or other chemical compounds activate specialized receptor cells within the oral cavity. Taste helps us decide what to eat and influences how efficiently we digest these foods. Human taste abilities have been shaped, in large part, by the ecological niches our evolutionary ancestors occupied and by the nutrients they sought. Early hominoids sought nutrition within a closed tropical forest environment, probably eating mostly fruit and leaves, and early hominids left this environment for the savannah and greatly expanded their dietary repertoire. They would have used their sense of taste to identify nutritious food items. The risks of making poor food selections when foraging not only entail wasted energy and metabolic harm from eating foods of low nutrient and energy content, but also the harmful and potentially lethal ingestion of toxins. The learned consequences of ingested foods may subsequently guide our future food choices. The evolved taste abilities of humans are still useful for the one billion humans living with very low food security by helping them identify nutrients. But for those who have easy access to tasty, energy-dense foods our sensitivities for sugary, salty and fatty foods have also helped cause over nutrition-related diseases, such as obesity and diabetes.
... A late-stage involvement (i.e., higher visual cortex areas of motion processing) can be confirmed if the global motion processing deficit occurs without a deficit in early visual processing [58]. Nevertheless, indirect evidence of late-stage cortical processing deficits for quiet stance control may be suggested by studies using fMRI during visuomotor activities in children with DCD [64,65]. Both Kashiwagi's and Zwicker's studies show different activation levels of higher visual cortex areas compared to TD children, and Zwicker et al. [64] interpret the observed higher cortical activation pattern as greater reliance on visual information to complete motor tasks. ...
Article
Full-text available
Developmental Coordination Disorder (DCD) is a leading movement disorder in children that commonly involves poor postural control. Multisensory integration deficit, especially the inability to adaptively reweight to changing sensory conditions, has been proposed as a possible mechanism but with insufficient characterization. Empirical quantification of reweighting significantly advances our understanding of its developmental onset and improves the characterization of its difference in children with DCD compared to their typically developing (TD) peers. Twenty children with DCD (6.6 to 11.8 years) were tested with a protocol in which a visual scene and a touch bar simultaneously oscillated medio-laterally at different frequencies and various amplitudes. Their data were compared to data on TD children (4.2 to 10.8 years) from a previous study. Gains and phases were calculated for the medio-lateral responses of the head and center of mass to both sensory stimuli. Gains and phases were simultaneously fitted by linear functions of age for each amplitude condition, segment, modality and group. Fitted gains and phases at two comparison ages (6.6 and 10.8 years) were tested for reweighting within each group and for group differences. Children with DCD reweight touch and vision at a later age (10.8 years) than their TD peers (4.2 years). Children with DCD demonstrate weak visual reweighting, no advanced multisensory fusion, and phase lags larger than those of TD children in response to both touch and vision. Two developmental perspectives, postural body scheme and dorsal stream development, are provided to explain the weak vision reweighting. The lack of multisensory fusion supports the notion that optimal multisensory integration is a slow developmental process and is vulnerable in children with DCD.
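The gains and phases described here are frequency-response measures: the amplitude ratio and temporal offset of the sway response at the frequency of each sensory stimulus. The sketch below shows one common way to obtain them from the Fourier components at the stimulus frequency; the sampling rate, stimulus frequency and simulated signals are illustrative assumptions, not the study's analysis code.

```python
# Minimal sketch: frequency-response gain and phase of body sway relative to a
# sinusoidal stimulus, estimated from the Fourier components at the stimulus frequency.
import numpy as np

fs = 50.0                      # sampling rate in Hz (assumed)
f_stim = 0.28                  # stimulus frequency in Hz (assumed)
t = np.arange(0, 180, 1 / fs)  # 3-minute trial (assumed)

stimulus = 4.0 * np.sin(2 * np.pi * f_stim * t)   # e.g., visual-scene displacement (arbitrary units)
sway = 1.5 * np.sin(2 * np.pi * f_stim * t - 0.6) + np.random.randn(t.size)  # simulated head displacement

freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_stim))             # FFT bin closest to the stimulus frequency
H = np.fft.rfft(sway)[k] / np.fft.rfft(stimulus)[k]

gain = np.abs(H)               # response amplitude relative to stimulus amplitude
phase = np.angle(H, deg=True)  # negative values indicate that the response lags the stimulus
```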
... To have coherent and unified percepts of multimodal events (e.g., containing sound and light as in the case of a moving car or a vocalizing conspecific), sensory information across these apparently divergent pathways needs to be integrated. Integration of information across two or more sensory channels involves multiple subcortical structures [1][2][3][4], as well as cortical regions (e.g., parietal cortex [5,6], the superior temporal sulcus [7][8][9], and the insular cortex [10,11]). ...
Article
Full-text available
Recent studies suggest that exposure to only one component of audiovisual events can lead to cross-modal cortical activation. However, it is not certain whether such crossmodal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term associations. A recent study demonstrated that crossmodal cortical recruitment can occur even after a brief exposure to bimodal stimuli without semantic association. In addition, the authors showed that the primary visual cortex is under such crossmodal influence. In the present study, we used molecular activity mapping of the immediate early gene zif268. We found that animals, which had previously been exposed to a combination of auditory and visual stimuli, showed an increased number of active neurons in the primary visual cortex when presented with sounds alone. As previously implied, this crossmodal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (~45 min) and lasted for a relatively long period after the initial exposure (~1 day). These results suggest that the previously reported findings may be directly rooted in the increased activity of the neurons occupying the primary visual cortex.
... On the other hand, other studies point clearly to neural interactions between the senses, using electrophysiological and imaging techniques (e.g., Bolognini and Maravita, 2007; Nakashita et al., 2008; Wang et al., 2008; Kayser et al., 2009; ...). ...
Article
Full-text available
Many recent studies show that the human brain integrates information across the different senses and that stimuli of one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We find that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation reveals a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic “dipper function,” with the minimum thresholds occurring at a given “pedestal speed.” When visual and tactile coherent stimuli were combined (summation condition), the thresholds for these multisensory stimuli also showed a “dipper function,” with the minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, and was well predicted by the maximum-likelihood estimation model (agreeing with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement. Adding a non-informative “pedestal” motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation did not occur for neutral stimuli like sounds (which would also have reduced temporal uncertainty), nor for motion in the opposite direction, even in blocked trials where the subjects knew that the motion was in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory processing.
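The maximum-likelihood estimation model invoked here is the standard optimal cue-combination rule for independent Gaussian noise, under which the bimodal discrimination variance is predicted from the two unimodal variances (the textbook form of the model, not an equation quoted from the paper):

```latex
\sigma_{VT}^{2} \;=\; \frac{\sigma_{V}^{2}\,\sigma_{T}^{2}}{\sigma_{V}^{2} + \sigma_{T}^{2}}
```

When the visual and tactile thresholds are comparable, this predicts a modest improvement of at most a factor of \sqrt{2} in the combined threshold, consistent with the weak, non-specific summation effect reported above.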
... The consequence of this finding, namely that body ownership itself is not necessarily mediated by the PMv, is an important merit of the study, complementing convincingly the theoretical concept of the function of PMv. Gentile et al. (2011) showed that the PMv reacts in an additive manner to incoming sensory input; i.e., the PMv is more active during visuotactile stimulation of one's own hand than during visual perception or tactile stimulation alone, indicating an integrative, stimulation-linking role of the PMv, which was also shown for the posterior parietal cortex (PCp) (Nakashita et al., 2008). Activity in both structures is positively correlated with vividness of RHI perceptions (Ehrsson et al., 2004), complementing findings which showed that both the PCp and the PMv can bridge the gap between multimodal sensory mismatches (Bremmer et al., 2001). ...
Article
The perception of one's own hand is an eminently coherent impression: proprioceptive, tactile, and visual inputs usually correspond perfectly. Current theoretical frameworks postulate that this multimodal integration is required for a feeling of ownership of the body, and is ultimately accompanied
Chapter
To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, which include five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously. Therefore, multisensory integration in the brain plays an important role in performance and perception. This review focuses on crossmodal interactions between vision and touch. Many previous studies have indicated that visual information affects tactile perception and, in turn, that tactile perception also activates MT, the main visual motion information processing area. However, few studies have explored how crossmodal information between vision and touch is processed. Here, the authors highlight the processing mechanism of this crossmodal interaction in the brain. They show that integration between vision and touch has two stages: combination and integration.
Chapter
In this chapter, based on the previously discussed experimental data, and on a wider review of the relevant neurosemantic literature, I provide a more detailed discussion of the specific neural regions which underpin, respectively, inferential competence and referential competence.
Article
Full-text available
Here, we briefly overview the various aspects of classic serotonergic hallucinogens reported by a number of studies. One of the key hypotheses of our paper is that the visual effects of psychedelics might play a key role in resetting fears. Namely, we especially focus on visual processes, since they are among the most prominent features of hallucinogen-induced hallucinations. We hypothesize that our brain has an ancient visual-based (preverbal) intrinsic cognitive process that, during the transient inhibition of top-down convergent and abstract thinking (mediated by the prefrontal cortex) by psychedelics, can neutralize emotional fears of unconscious and conscious life experiences from the past. In these processes, the decreased functional integrity of the self-referencing processes of the default mode network, the modified multisensory integration (linked to bodily self-consciousness and self-awareness), and the modified amygdala activity may also play key roles. Moreover, the emotional reset (elimination of stress-related emotions) by psychedelics may induce psychological changes and overwrite the stress-related neuroepigenetic information of past unconscious and conscious emotional fears.
Chapter
Humans can haptically identify common three-dimensional objects surprisingly well. What are the neural mechanisms underlying this ability? Previous neuroimaging studies have shown that haptic object recognition involves a distributed network of brain regions beyond the conventional somatosensory cortices. However, the relative contributions of these regions to haptic object recognition are not well understood. In this chapter, I discuss three key hypotheses concerning the brain network underlying haptic object processing and its interaction with visual object processing. The first is that the occipito-temporal cortex, which has been considered to be part of the conventional visual cortex, plays a critical role in the haptic identification of common objects. The second is that distinct brain regions are involved in the haptic processing of two types of feature used for object identification: macro-geometric (e.g., shape) and material (e.g., roughness) properties. The third is that different brain regions are also involved in the visuo-haptic interaction of macro-geometric and material properties. Finally, I discuss some issues that remain to be addressed in future studies.
Article
In non-human primates, the tactile representation at the cortical level has mostly been studied using single cell recordings targeted to specific cortical areas. In this study, we explored the representation of tactile information delivered to the face or the shoulders at the whole-brain level, using functional magnetic resonance imaging (fMRI) in the non-human primate. We used air-puffs delivered either to the center of the face, the periphery of the face or the shoulders. These stimulations elicited activations in numerous cortical areas, encompassing the primary and secondary somatosensory areas, prefrontal and premotor areas, parietal, temporal and cingulate areas, as well as low-level visual cortex. Importantly, a specific parieto-temporo-prefrontal network responded to the three stimulations but presented a marked preference for air-puffs directed to the center of the face. This network corresponds to areas that are also involved in near space representation as well as in the multisensory integration of information at the interface between this near space and the skin of the face, and is probably involved in the construction of a peripersonal space representation around the head.
Conference Paper
Functional magnetic resonance imaging (fMRI), one of the most popular forms of neuroimaging, uses MRI to measure the hemodynamic response related to neural activity. In the present study, we developed a tactile speed stimulator for the fMRI environment. The device is MRI-compatible and can serve to investigate the underlying neural mechanisms of tactile speed discrimination. The primary components of the tactile speed stimulator system include a computer, two drums with dots, a motor controller and a reaction key. We evaluated the function, precision and performance of the system in a magnetic field. The results showed that the device performance is unaffected by the magnetic field and that the device does not interfere with the magnetic field, making it usable with fMRI. Furthermore, a simple button-press fMRI experiment was conducted using the system. Compared to baseline, the most prominent activation areas evoked by the button-press task were in the lobulus parietalis inferior, gyrus postcentralis, gyrus frontalis inferior and gyrus precentralis. In conclusion, these results indicated that brain activation can be reliably detected with the present device.
Article
Full-text available
Importance: Brain imaging studies have identified robust changes in brain structure and function during the development of psychosis, but the contribution of abnormal brain connectivity to the onset of psychosis is unclear. Furthermore, antipsychotic treatment can modulate brain activity and functional connectivity during cognitive tasks. Objectives: To investigate whether dysfunctional brain connectivity during working memory (WM) predates the onset of psychosis and whether connectivity parameters are related to antipsychotic treatment. Design: Dynamic causal modeling study of functional magnetic resonance imaging data. Setting: Participants were recruited from the specialized clinic for the early detection of psychosis at the Department of Psychiatry, University of Basel, Basel, Switzerland. Participants: Seventeen participants with an at-risk mental state (mean [SD] age, 25.24 [6.3] years), 21 individuals with first-episode psychosis (mean [SD] age, 28.57 [7.2] years), and 20 healthy controls (mean [SD] age, 26.5 [4] years). Main outcomes and measures: Functional magnetic resonance imaging data were recorded while participants performed an N-back WM task. Functional interactions among brain regions involved in WM, in particular between frontal and parietal brain regions, were characterized using dynamic causal modeling. Bayesian model selection was performed to evaluate the likelihood of alternative WM network architectures across groups, whereas Bayesian model averaging was used to examine group differences in connection strengths. Results: We observed a progressive reduction in WM-induced modulation of connectivity from the middle frontal gyrus to the superior parietal lobule in the right hemisphere in healthy controls, at-risk mental state participants, and first-episode psychosis patients. Notably, the abnormal modulation of connectivity in first-episode psychosis patients was normalized by treatment with antipsychotics. Conclusions and relevance: Our findings suggest that the vulnerability to psychosis is associated with a progressive failure of functional integration of brain regions involved in WM processes, including visual encoding and rule updating, and that treatment with antipsychotics may have the potential to counteract this.
Article
The planum temporale (PT) is a highly lateralized brain area associated with auditory and language processing. In schizophrenia, reduced structural and functional laterality of the PT has been suggested, which is of clinical interest because of its potential role in the generation of auditory verbal hallucinations. We investigated whether resting-state functional imaging (fMRI) of the PT reveals aberrant functional connectivity and laterality in patients with schizophrenia (SZ) and unaffected relatives, and examined possible associations between altered intrinsic functional organization of auditory networks and hallucinations. We estimated functional connectivity between bilateral PT and whole-brain in 24 SZ patients, 22 unaffected first-degree relatives and 24 matched healthy controls. The results indicated reduced functional connectivity between PT and temporal, parietal, limbic and subcortical regions in SZ patients and relatives in comparison with controls. Altered functional connectivity correlated with predisposition towards hallucinations (measured with the Revised Hallucination Scale [RHS]) in both patients and relatives. We also observed reduced functional asymmetry of the superior temporal gyrus in patients and relatives, which correlated significantly with acute severity of hallucinations in the patient group. To conclude, SZ patients and relatives showed abnormal asymmetry and aberrant connectivity in the planum temporale during resting-state, which was related to psychopathology. These results are in line with results from auditory processing and symptom-mapping studies that suggest that the PT is a central node in the generation of hallucinations. Our findings support reduced intrinsic functional hemispheric asymmetry of the auditory network as a possible trait marker in schizophrenia.
Article
Brodmann's area 5 is implicated in the sensorimotor control of hand movement in humans and nonhuman primates. However, little is known about the influence of area 5 on the neural circuitry within the primary motor cortex that underpins hand control. The present study investigated the neural circuitry of interhemispheric inhibition (IHI) that exists between homologous muscle representations in the motor cortex. Using paired-pulse transcranial magnetic stimulation, IHI was probed from the left-to-right hemisphere and vice versa for the first dorsal interosseous muscle of the hand at short (10 ms) and long (40 ms) latencies before and for up to 1 h after continuous θ-burst stimulation over left hemisphere area 5. The results indicate that continuous θ-burst over area 5 increases IHI at short latencies in the left hand (left-to-right inhibition) from 5-20 and 45-60 min after stimulation. Short latency inhibition in the right hand and bilateral long latency inhibition remain unaltered. The data indicate that area 5 influences the IHI that exists between the representations of the hand muscles. This effect occurs ipsilateral to the left area 5, suggesting that effects are mediated through changes in the excitability of transcallosal neurons originating in the left motor cortex.
Article
The role of sensory information in motor control has been studied, but the cortical processing underlying the cross-modal relationship between visual and somatosensory information for movement execution remains a matter of debate. Visual estimates of limb positions are congruent with proprioceptive estimates under normal visual conditions, but a mismatch between the watched and felt movement of the hand disrupts motor execution. We investigated whether activation in somatosensory areas was affected by the discordance between the intended and an executed action. Subjects performed self-paced thumb movements of the left hand under normal visual and mirror conditions. The Mirror condition provided non-veridical and unexpected visual feedback. The results showed activity in the primary somatosensory area to be inhibited and activity in the secondary somatosensory area (SII) to be enhanced with voluntary movement, and neural responses in the SII and parietal cortex were strongly affected by the unexpected visual feedback. These results provide evidence that visual information plays a crucial role in activation in somatosensory areas during motor execution. A mechanism that monitors sensory inputs and motor outputs congruent with the current intention is necessary to control voluntary movement.
Article
Full-text available
The dorsal premotor cortex is a functionally distinct cortical field or group of fields in the primate frontal cortex. Anatomical studies have confirmed that most parietal input to the dorsal premotor cortex originates from the superior parietal lobule. However, these projections arise not only from the dorsal aspect of area 5, as has long been known, but also from newly defined areas of posterior parietal cortex, which are directly connected with the extrastriate visual cortex. Thus, the dorsal premotor cortex receives much more direct visual input than previously accepted. It appears that this fronto-parietal network functions as a visuomotor controller—one that makes computations based on proprioceptive, visual, gaze, attentional, and other information to produce an output that reflects the selection, preparation, and execution of movements.
Article
Full-text available
A set of three experiments was performed to investigate the role of visual imaging in the haptic recognition of raised-line depictions of common objects. Blindfolded, sighted (Experiment 1) observers performed the task very poorly, while several findings converged to indicate that a visual translation process was adopted. These included: (1) strong correlations between imageability ratings (obtained in Experiment 1 and, independently, in Experiment 2) and both recognition speed and accuracy, (2) superior performance with, and greater ease of imaging, two-dimensional as opposed to three-dimensional depictions, despite equivalence in rated line complexity, and (3) a significant correlation between the general ability of the observer to image and obtained imageability ratings of the stimulus depictions. That congenitally blind observers performed the same task even more poorly, while their performance did not differ for two- versus three-dimensional depictions (Experiment 3), provides further evidence that visual translation was used by the sighted. Such limited performance is contrasted with the considerable skill with which real common objects are processed and recognized haptically. The reasons for the general difference in the haptic performance of two- versus three-dimensional tasks are considered. Implications for the presentation of spatial information in the form of tangible graphics displays for the blind are also discussed.
Article
Full-text available
Reaction times (RTs) to a photic stimulus were evaluated as a function of the interval between a flash and a click that followed at varying intervals (ranging from 20 to 120 msec) in a choice RT task. RT to the flash was found to be a linear function of the interstimulus interval between the flash and the click: the shorter the interval, the faster the RT. With choice RT, mean responses to flash-click pairs separated by as much as 120 msec were reliably faster than those obtained with the flash presented alone.
Article
Full-text available
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli are changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades often are initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.
Article
Full-text available
Using noninvasive functional magnetic resonance imaging (fMRI) technique, we analyzed the responses in human area MT with regard to visual motion, color, and luminance contrast sensitivity, and retinotopy. As in previous PET studies, we found that area MT responded selectively to moving (compared to stationary) stimuli. The location of human MT in the present fMRI results is consistent with that of MT in earlier PET and anatomical studies. In addition we found that area MT has a much higher contrast sensitivity than that in several other areas, including primary visual cortex (V1). Functional MRI half-amplitudes in V1 and MT occurred at approximately 15% and 1% luminance contrast, respectively. High sensitivity to contrast and motion in MT have been closely associated with magnocellular stream specialization in nonhuman primates. Human psychophysics indicates that visual motion appears to diminish when moving color-varying stimuli are equated in luminance. Electrophysiological results from macaque MT suggest that the human percept could be due to decreases in firing of area MT cells at equiluminance. We show here that fMRI activity in human MT does in fact decrease at and near individually measured equiluminance. Tests with visuotopically restricted stimuli in each hemifield produced spatial variations in fMRI activity consistent with retinotopy in human homologs of macaque areas V1, V2, V3, and VP. Such activity in area MT appeared much less retinotopic, as in macaque. However, it was possible to measure the interhemispheric spread of fMRI activity in human MT (half amplitude activation across the vertical meridian = approximately 15 degrees).
Article
Full-text available
The functional dissociation of human extrastriate cortical processing streams for the perception of face identity and location was investigated in healthy men by measuring visual task-related changes in regional cerebral blood flow (rCBF) with positron emission tomography (PET) and H2(15)O. Separate scans were obtained while subjects performed face matching, location matching, or sensorimotor control tasks. The matching tasks used identical stimuli for some scans and stimuli of equivalent visual complexity for others. Face matching was associated with selective rCBF increases in the fusiform gyrus in occipital and occipitotemporal cortex bilaterally and in a right prefrontal area in the inferior frontal gyrus. Location matching was associated with selective rCBF increases in dorsal occipital, superior parietal, and intraparietal sulcus cortex bilaterally and in dorsal right premotor cortex. Decreases in rCBF, relative to the sensorimotor control task, were observed for both matching tasks in auditory, auditory association, somatosensory, and midcingulate cortex. These results suggest that, within a sensory modality, selective attention is associated with increased activity in those cortical areas that process the attended information but is not associated with decreased activity in areas that process unattended visual information. Selective attention to one sensory modality, on the other hand, is associated with decreased activity in cortical areas dedicated to processing input from other sensory modalities. Direct comparison of our results with those from other PET-rCBF studies of extrastriate cortex demonstrates agreement in the localization of cortical areas mediating face and location perception and dissociations between these areas and those mediating the perception of color and motion.
Article
Full-text available
In pursuing our work on the organization of human visual cortex, we wanted to specify more accurately the position of the visual motion area (area V5) in relation to the sulcal and gyral pattern of the cerebral cortex. We also wanted to determine the intersubject variation of area V5 in terms of position and extent of blood flow change in it, in response to the same task. We therefore used positron emission tomography (PET) to determine the foci of relative cerebral blood flow increases produced when subjects viewed a moving checkerboard pattern, compared to viewing the same pattern when it was stationary. We coregistered the PET images from each subject with images of the same brain obtained by magnetic resonance imaging, thus relating the position of V5 in all 24 hemispheres examined to the individual gyral configuration of the same brains. This approach also enabled us to examine the extent to which results obtained by pooling the PET data from a small group of individuals (e.g., six), chosen at random, would be representative of a much larger sample in determining the mean location of V5 after transformation into Talairach coordinates. After stereotaxic transformation of each individual brain, we found that the position of area V5 can vary by as much as 27 mm in the left hemisphere and 18 mm in the right for the pixel with the highest significance for blood flow change. There is also an intersubject variability in blood flow change within it in response to the same visual task. V5 nevertheless bears a consistent relationship, within each brain, to the sulcal pattern of the occipital lobe. It is situated ventrolaterally, just posterior to the meeting point of the ascending limb of the inferior temporal sulcus and the lateral occipital sulcus. In position it corresponds almost precisely with Flechsig's Feld 16, one of the areas that he found to be myelinated at birth.
Article
Full-text available
The concept of working memory is central to theories of human cognition because working memory is essential to such human skills as language comprehension and deductive reasoning. Working memory is thought to be composed of two parts, a set of buffers that temporarily store information in either a phonological or visuospatial form, and a central executive responsible for various computations such as mental arithmetic. Although most data on working memory come from behavioural studies of normal and brain-injured humans, there is evidence about its physiological basis from invasive studies of monkeys. Here we report positron emission tomography (PET) studies of regional cerebral blood flow in normal humans that reveal activation in right-hemisphere prefrontal, occipital, parietal and premotor cortices accompanying spatial working memory processes. These results begin to uncover the circuitry of a working memory system in humans.
Article
Full-text available
Primary visual cortex receives visual input from the eyes through the lateral geniculate nuclei, but is not known to receive input from other sensory modalities. Its level of activity, both at rest and during auditory or tactile tasks, is higher in blind subjects than in normal controls, suggesting that it can subserve nonvisual functions; however, a direct effect of non-visual tasks on activation has not been demonstrated. To determine whether the visual cortex receives input from the somatosensory system we used positron emission tomography (PET) to measure activation during tactile discrimination tasks in normal subjects and in Braille readers blinded in early life. Blind subjects showed activation of primary and secondary visual cortical areas during tactile tasks, whereas normal controls showed deactivation. A simple tactile stimulus that did not require discrimination produced no activation of visual areas in either group. Thus in blind subjects, cortical areas normally reserved for vision may be activated by other sensory modalities.
Article
Full-text available
Studies on nonhuman primates show that the premotor (PM) and prefrontal (PF) areas are necessary for the arbitrary mapping of a set of stimuli onto a set of responses. However, positron emission tomography (PET) measurements of regional cerebral blood flow (rCBF) in human subjects have failed to reveal the predicted rCBF changes during such behavior. We therefore studied rCBF while subjects learned two arbitrary mapping tasks. In the conditional motor task, visual stimuli instructed which of four directions to move a joystick (with the right, dominant hand). In the evaluation task, subjects moved the joystick in a predetermined direction to report whether an arrow pointed in the direction associated with a given stimulus. For both tasks there were three rules: for the nonspatial rule, the pattern within each stimulus determined the correct direction; for the spatial rule, the location of the stimulus did so; and for the fixed-response rule, movement direction was constant regardless of the pattern or its location. For the nonspatial rule, performance of the evaluation task led to a learning-related increase in rCBF in a caudal and ventral part of the premotor cortex (PMvc, area 6), bilaterally, as well as in the putamen and a cingulate motor area (CM, area 24) of the left hemisphere. Decreases in rCBF were observed in several areas: the left ventro-orbital prefrontal cortex (PFv, area 47/12), the left lateral cerebellar hemisphere, and, in the right hemisphere, a dorsal and rostral aspect of PM (PMdr, area 6), dorsal PF (PFd, area 9), and the posterior parietal cortex (area 39/40). During performance of the conditional motor task, there was only a decrease in the parietal area. For the spatial rule, no rCBF change reached significance for the evaluation task, but in the conditional motor task, a ventral and rostral premotor region (PMvr, area 6), the dorsolateral prefrontal cortex (PFdl, area 46), and the posterior parietal cortex (area 39/40) showed decreasing rCBF during learning, all in the right hemisphere. These data confirm the predicted rCBF changes in premotor and prefrontal areas during arbitrary mapping tasks and suggest that a broad frontoparietal network may show decreased synaptic activity as arbitrary rules become more familiar.
Article
Full-text available
To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To compare the behaviour of the brain of the blind and the sighted directly, non-Braille tactile tasks were performed by six different blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, also the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks, in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.
Article
We studied the corticocortical connections of architectonically defined areas of parietal and temporoparietal cortex, with emphasis on areas in the intraparietal sulcus (IPS) that are implicated in visual and somatosensory integration. Retrograde tracers were injected into selected areas of the IPS, superior temporal sulcus, and parietal lobule. The distribution of labeled cells was charted in relation to architectonically defined borders throughout the hemisphere and displayed on computer-generated three-dimensional reconstructions and on cortical flat maps. Injections centered in the ventral intraparietal area (VIP) revealed a complex pattern of inputs from numerous visual, somatosensory, motor, and polysensory areas, and from presumed vestibular- and auditory-related areas. Sensorimotor projections were predominantly from the upper body representations of at least six somatotopically organized areas. In contrast, injections centered in the neighboring ventral lateral intraparietal area (LIPv) revealed inputs mainly from extrastriate visual areas, consistent with previous studies. The pattern of inputs to LIPv largely overlapped those to zone MSTdp, a newly described subdivision of the medial superior temporal area. These results, in conjunction with those from injections into other parietal areas (7a, 7b, and anterior intraparietal area), support the fine-grained architectonic partitioning of cortical areas described in the preceding study. They also support and extend previous evidence for multiple distributed networks that are implicated in multimodal integration, especially with regard to area VIP. J. Comp. Neurol. 428:112–137, 2000. © 2000 Wiley-Liss, Inc.
Article
This fMRI study investigated the human somatosensory system, especially the secondary somatosensory cortex (SII), with respect to its potential somatotopic organization. Eight subjects received electrical stimulation on their right second finger, fifth finger and hallux. Within SII, the typical finding for both fingers was a representation site within the contralateral parietal operculum roughly halfway between the lip of the lateral sulcus and its fundus, whereas the representation site of the hallux was found more medially to this position at the fundus of the lateral sulcus, near the posterior pole of the insula. Somatotopy in SII seems to be less fine-grained than in primary somatosensory cortex (SI), as, in contrast to SI, no separate representations of the two fingers in SII were observed. A similar somatotopic representation pattern between fingers and the hallux was also observed within ipsilateral SII, indicating somatotopy of contra- as well as ipsilateral SII using unilateral stimulation. Further areas exhibiting activation were found in the superior and inferior parietal lobule, in the supplementary and cingulate motor area, and in the insula.
Article
In monkeys, posterior parietal and premotor cortex play an important integrative role in polymodal motion processing. In contrast, our understanding of the convergence of senses in humans is only at its beginning. To test for equivalencies between macaque and human polymodal motion processing, we used functional MRI in normals while presenting moving visual, tactile, or auditory stimuli. Increased neural activity evoked by all three stimulus modalities was found in the depth of the intraparietal sulcus (IPS), ventral premotor, and lateral inferior postcentral cortex. The observed activations strongly suggest that polymodal motion processing in humans and monkeys is supported by equivalent areas. The activations in the depth of IPS imply that this area constitutes the human equivalent of macaque area VIP.
Article
This paper concerns the spatial and intensity transformations that map one image onto another. We present a general technique that facilitates nonlinear spatial (stereotactic) normalization and image realignment. This technique minimizes the sum of squares between two images following nonlinear spatial deformations and transformations of the voxel (intensity) values. The spatial and intensity transformations are obtained simultaneously, and explicitly, using a least squares solution and a series of linearising devices. The approach is completely noninteractive (automatic), nonlinear, and noniterative. It can be applied in any number of dimensions. Various applications are considered, including the realignment of functional magnetic resonance imaging (MRI) time-series, the linear (affine) and nonlinear spatial normalization of positron emission tomography (PET) and structural MRI images, the coregistration of PET to structural MRI, and, implicitly, the conjoining of PET and MRI to obtain high resolution functional images.
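To illustrate the linearised least-squares idea summarised above, the following sketch (a minimal, hypothetical 1-D example; the variable names and the single-step solution are assumptions, not the authors' implementation) estimates a spatial shift and an intensity scaling between two signals from one first-order Taylor expansion.

import numpy as np

# Template g(x) and a shifted, intensity-scaled version f(x) = s * g(x - shift).
x = np.linspace(0.0, 10.0, 200)
g = np.exp(-(x - 5.0) ** 2)
true_shift, true_scale = 0.15, 1.2
f = true_scale * np.exp(-(x - 5.0 - true_shift) ** 2)

# First-order expansion: f(x) ~ s*g(x) - s*shift*g'(x).
# Treating [s, s*shift] as the unknowns gives an ordinary least-squares problem.
dg = np.gradient(g, x)                        # spatial derivative of the template
A = np.column_stack([g, -dg])                 # explanatory "images"
beta, *_ = np.linalg.lstsq(A, f, rcond=None)
s_hat = beta[0]
shift_hat = beta[1] / s_hat                   # recover the shift estimate

# A full implementation would iterate (re-expanding around the new estimate)
# and extend the same scheme to affine/nonlinear terms in three dimensions.
print(f"estimated scale = {s_hat:.3f}, estimated shift = {shift_hat:.3f}")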
Article
Three-dimensional MRI imaging techniques offer new possibilities for qualitative and quantitative studies of gross neuroanatomy, functional neuroanatomy and for neurosurgical planning. The digital nature of the data allows for the reconstruction of realistic three-dimensional models of an individual brain which can be sliced at arbitrary orientations for optimal visual inspection of often complex neuroanatomy and pathology. This is particularly relevant in the assessment of potential neuroanatomical correlates of temporal lobe epilepsy. Re-formatting of contiguous thinly sliced (1–2 mm thick) volumetric MRI data along planes parallel and perpendicular to the temporal plane allow finer visual discrimination and greater standardisation in qualitative procedures than previously possible. Perhaps more exciting are the applications of quantitative analysis where, for instance, accurate measurements of hippocampus and/or amygdala volumes provide important indicators of unilateral mesial temporal sclerosis which compare favourably with EEG and more invasive methods of lateralising the epileptogenic focus (Jack et al., 1990; Cascino et al., 1991; Watson et al, 1992; Cendes et al., 1993 a,b). For instance, by combining volumetric measurements of both hippocampus and amygdala, Cendes et al., (Chapter 9) quote correct lateralisation of focus in 93 of 100 temporal lobe epilepsy cases. The study of epilepsy arising from cortical abnormalities has been limited in the past by the difficulties of visualising the cortical surface from a set of conventional two-dimensional MRI slices. New acquisition techniques with gradient echo as opposed to spin echo techniques allow for an improved signal-to-noise ratio in thin slices in times compatible with clinical examinations. Whole brain coverage with thin slice data is now possible, such that partial volume effects are minimised with consequent improvements in fine detail. Numerous authors have reported dramatic improvements in the assessment of cortical dysplasia and grey matter heterotopias, particularly for more subtle abnormalities (Palmini et al., 1991a,b,c; Barkovich and Kjos, 1992a,b,c). The impact of this improved raw data when combined with new techniques for generating three-dimensional surface renderings in reasonably interactive circumstances is yet to be fully realised but initial experience is promising. At present, most studies have relied upon visual inspection to identify abnormalities in gyration on three-dimensional surface-rendered MRI. Such methods are quite acceptable for gross pathologies but, in a manner similar to mesial temporal volumetrics, the identification of more subtle distortions may require quantitative analysis of left/right differences and comparison of individual gyral surface area or gyral/sulcal locations with previously established population norms. Cook et al., (Chapter 47) have approached the problem by application of fractal analysis to two-dimensional MRI images from normal and frontal lobe epilepsy (FLE) patients. The grey-white matter interface was extracted by image processing procedures as a continuous contour and the fractal dimension, an index of contour complexity, derived. Results indicate that 10 of 16 FLE patients had a fractal index more than 3 standard deviations (3SD) below normal. 
In its present form, the method provides a non-specific indicator of cortical abnormality, yielding an overall index of complexity rather than identifying specific abnormalities, and is implemented in two dimensions rather than three dimensions. Nevertheless, it illustrates the potential of quantitative analysis for detecting aberrant cortical morphology. For a more directed analysis of cortical folding, a model of normal neuroanatomical variability, expressed in three-dimensional coordinates, is necessary. Keyserlingk and co-workers have developed methods for digitising sulcal patterns from post-mortem brains and constructed a map, with cuboid elements of 4 mm or 8 mm edge length, of major sulcal anatomy from 30 such brains (Keyserlingk et al., 1983, 1985, 1988; Niemann et al, 1988). The advent of high resolution MRI scanning offers finer spatial and contrast resolution in normal brain in vivo. At the Montreal Neurological Institute, MRI and PET imaging techniques are combined with three-dimensional graphics and computational analysis in the study of functional neuroanatomy of cognitive and sensorimotor processing. As part of this “brain mapping” programme, we have collected a database of over 300 MRI volumes from young normal subjects and are presently engaged in a series of projects whose long-term goal is the construction of a probabilistic description of normal neuroanatomy derived from high-resolution (1 mm thick slices) MRI data. In this chapter we briefly describe the current brain mapping environment at our institute and the current development of the MRI atlas project in both volumetric and surface domains.
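The fractal-dimension index of contour complexity referred to above can be estimated in several ways; the snippet below is a generic box-counting sketch for a binary 2-D contour (an illustrative stand-in, not the procedure actually used by Cook et al.).

import numpy as np

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary 2-D contour.

    mask : 2-D boolean array, True where the contour passes.
    Returns the slope of log(count) versus log(1 / box size).
    """
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Count boxes containing at least one contour pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    sizes = np.array(box_sizes, dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Example: a thin circular contour should give a dimension close to 1.
yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(xx - 128, yy - 128)
contour = np.abs(r - 80) < 1.0
print(f"estimated dimension = {box_count_dimension(contour):.2f}")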
Article
In macaque monkeys, the posterior parietal cortex (PPC) is concerned with the integration of multimodal information for constructing a spatial representation of the external world (in relation to the macaque's body or parts thereof), and planning and executing object-centred movements. The areas within the intraparietal sulcus (IPS), in particular, serve as interfaces between the perceptive and motor systems for controlling arm and eye movements in space. We review here the latest evidence for the existence of the IPS areas AIP (anterior intraparietal area), VIP (ventral intraparietal area), MIP (medial intraparietal area), LIP (lateral intraparietal area) and CIP (caudal intraparietal area) in macaques, and discuss putative human equivalents as assessed with functional magnetic resonance imaging. The data suggest that anterior parts of the IPS comprising areas AIP and VIP are relatively well preserved across species. By contrast, posterior areas such as area LIP and CIP have been found more medially in humans, possibly reflecting differences in the evolution of the dorsal visual stream and the inferior parietal lobule. Despite interspecies differences in the precise functional anatomy of the IPS areas, the functional relevance of this sulcus for visuomotor tasks comprising target selections for arm and eye movements, object manipulation and visuospatial attention is similar in humans and macaques, as is also suggested by studies of neurological deficits (apraxia, neglect, Bálint's syndrome) resulting from lesions to this region.
Article
Statistical parametric maps are spatially extended statistical processes that are used to test hypotheses about regionally specific effects in neuroimaging data. The most established sorts of statistical parametric maps (e.g., Friston et al. [1991]: J Cereb Blood Flow Metab 11:690–699; Worsley et al. [1992]: J Cereb Blood Flow Metab 12:900–918) are based on linear models, for example ANCOVA, correlation coefficients and t tests. In the sense that these examples are all special cases of the general linear model it should be possible to implement them (and many others) within a unified framework. We present here a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates and interaction of factors). This approach brings together two well established bodies of theory (the general linear model and the theory of Gaussian fields) to provide a complete and simple framework for the analysis of imaging data. The importance of this framework is twofold: (i) Conceptual and mathematical simplicity, in that the same small number of operational equations is used irrespective of the complexity of the experiment or nature of the statistical model and (ii) the generality of the framework provides for great latitude in experimental design and analysis.
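Since every test in this framework reduces to fitting the general linear model Y = Xβ + ε and evaluating a contrast of the estimated parameters, a minimal single-voxel sketch (simulated data and an arbitrary boxcar regressor, purely illustrative) looks like this:

import numpy as np

rng = np.random.default_rng(0)

# Toy single-voxel time series: a boxcar "task" regressor plus noise.
n_scans = 100
task = np.tile(np.r_[np.ones(10), np.zeros(10)], 5)   # on/off blocks
y = 2.0 * task + rng.normal(0, 1.0, n_scans)          # simulated signal

# General linear model: y = X @ beta + error
X = np.column_stack([task, np.ones(n_scans)])         # regressor + constant
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance and the t statistic for the contrast c = [1, 0],
# i.e. "task effect different from zero".
resid = y - X @ beta
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = resid @ resid / dof
c = np.array([1.0, 0.0])
t = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
print(f"t({dof}) = {t:.2f}")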
Article
This technical note deals with a priori estimation of efficiency of functional magnetic resonance imaging (fMRI) designs. The efficiency of an estimator is a measure of how reliable it is and depends on error variance (the variance not modeled by explanatory variables in the design matrix) and the design variance (a function of the explanatory variables and the contrast tested). Changes in the experimental design can induce changes in the variance of estimated responses. This translates into changes in the standard error of the response estimate or equivalently into changes in efficiency. One consequence is that statistics, testing for the same activation in different contexts (i.e., experimental designs), can change substantially even if the activation and error variance are exactly the same. We demonstrate this effect using an event-related fMRI study of single word reading during blocked and randomized trial presentations. Furthermore, we show that the error variance can change with the experimental design. This highlights a problem with a priori comparison of efficiency for two or more experimental designs, which usually assumes identical error variance.
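The design variance referred to above is commonly summarised as c'(X'X)⁻¹c, so one hedged way to compare designs a priori is to compute its inverse for each candidate design matrix. The sketch below does this for an illustrative blocked versus randomised regressor after convolution with a crude haemodynamic response model; all shapes and timings are assumptions, not the study's actual designs.

import numpy as np

rng = np.random.default_rng(1)

def efficiency(X, c):
    """Design efficiency for contrast c: 1 / (c' (X'X)^-1 c).
    Ignores error variance, so it is only useful for comparing designs."""
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

n = 200                                   # scans (TR = 2 s assumed)
t = np.arange(0, 30, 2.0)
hrf = (t / 6.0) ** 2 * np.exp(-t / 3.0)   # crude gamma-shaped response (illustrative)
hrf /= hrf.sum()

# Blocked design: 20-scan on/off epochs; randomised: same number of "on" scans.
blocked = np.tile(np.r_[np.ones(20), np.zeros(20)], 5)
randomized = np.zeros(n)
randomized[rng.choice(n, size=100, replace=False)] = 1.0

c = np.array([1.0, 0.0])                  # task regressor versus baseline
for name, onsets in [("blocked", blocked), ("randomized", randomized)]:
    reg = np.convolve(onsets, hrf)[:n]    # convolve with the response model
    X = np.column_stack([reg, np.ones(n)])
    print(f"{name:10s} efficiency = {efficiency(X, c):.3f}")

With these assumptions the blocked regressor retains far more variance after convolution than the rapidly alternating randomised one, so its computed efficiency for detecting the task effect is substantially higher, which is the kind of design-dependent difference the note above describes.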
Article
The location of human area V5 (or MT) has been correlated with the intersection of the ascending limb of the inferior temporal sulcus (ALITS) and the lateral occipital sulcus (LO). This study was undertaken to attempt a replication and quantification of these observations using functional magnetic resonance imaging. V5 was significantly activated in 19 hemispheres with alternating, low contrast, random checkerboard patterns. We confirmed the stereotaxic location of V5 and were able to describe a fairly consistent sulcal pattern in the parieto-temporo-occipital cortex. V5 was usually (95%) buried within a sulcus, most commonly within the inferior temporal sulcus (ITS) (11%), the ascending limb of the ITS (ALITS) (53%) and the posterior continuation of the ITS (26%). The average distance from V5 of two identified anatomical landmarks of V5, the junctions of the LO and the ALITS, and the ITS and ALITS, were both ~1 cm. However, the LO–ALITS junction often had to be determined by interpolation (47%), and was not always present even with interpolation (21%). In contrast, the ITS–ALITS junction was always present and V5 was usually (90%) located in a sulcus intersecting with this junction, making it a more reliable landmark for localizing V5 with respect to gross morphological features on individual cortical surfaces.
Article
Most verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.
Article
The visual receptive field physiology and anatomical connections of the lateral intraparietal area (area LIP), a visuomotor area in the lateral bank of the inferior parietal lobule, were investigated in the cynomolgus monkey ( Macaca fascicularis ). Afferent input and physiological properties of area 5 neurons in the medial bank of the intraparietal sulcus (i.e., area PEa) were also determined. Area LIP is composed of two myeloarchitectonic zones: a ventral zone (LIPv), which is densely myelinated, and a lightly myelinated dorsal zone (LIPd) adjacent to visual area 7a. Previous single‐unit recording studies in our laboratory have characterized visuomotor properties of area LIP neurons, including many neurons with powerful saccade‐related activity. In the first part of the present study, single‐unit recordings were used to map visual receptive fields from neurons in the two myeloarchitectonic zones of LIP. Receptive field size and eccentricity were compared to those in adjacent area 7a. The second part of the study investigated the cortico‐cortical connections of area LIP neurons using tritiated amino acid injections and fluorescent retrograde tracers placed directly into different rostrocaudal and dorsoventral parts of area LIP. The approach to area LIP was through somatosensory area 5, which eliminated the possibility of diffusion of tracers into area 7a. Unlike many area 7a receptive fields, which are large and bilateral, area LIP receptive fields were much smaller and exclusively confined to the contralateral visual field. In area LIP, an orderly progression in visual receptive fields was evident as the recording electrode moved tangentially to the cortical surface and through the depths of area LIP. The overall visual receptive field organization, however, yielded only a rough topography with some duplications in receptive field representation within a given rostrocaudal or dorsoventral part of LIP. The central visual field representation was generally located more dorsally and the peripheral visual field more ventrally within the sulcus. The lower visual field was represented more anteriorly and the upper visual field more posteriorly. In LIP, receptive field size increased with eccentricity but with much variability within the sample. Area LIPv was found to have reciprocal cortico‐cortical connections with many extrastriate visual areas, including the parieto‐occipital visual area PO; areas V3, V3A, and V4; the middle temporal area (MT); the middle superior temporal area (MST); dorsal prelunate area (DP); and area TEO (the occipital division of the intratemporal cortex). Area LIPv is also connected to area TF in the lateral posterior parahippocampal gyrus. Although area LIPd has many of the same cortico‐cortical connections as LIPv, some differences were apparent. Area LIPd and not LIPv has connections with visual areas TEa and TEm (anterior and medial divisions of the intratemporal cortex) and with multimodal area IPa (subdivision of association cortex in caudal bank of superior temporal cortex) in the superior temporal sulcus. A topographic relationship between rostrocaudal parts of area LIP (including both LIPv and LIPd) and the lateromedial parts of prefrontal cortex across areas 8a (frontal eye fields) and the medial portion of area 46 was also apparent. 
Intrinsic connections of LIP with other areas in the inferior parietal lobule included a feedforward projection to area 7a and connections with the bimodal ventral intraparietal area (VIP) as well as with somatosensory area 7b (PF). Some retrogradely labeled cells were seen in area 5, but this projection was not confirmed by control injections placed in the medial bank of the intraparietal sulcus (area PEa). An interesting observation was that the input into areas PEa and LIP from parieto-occipital visual areas (the medial dorsal parietal area (MDP) and area PO) was found to be topographically organized such that MDP and the dorsal part of PO project to area PEa, while ventral PO and a few MDP neurons project to the opposite bank in LIP. This "visual" input to area PEa was also seen in single-unit recordings in area 5 in which a small number of visually responsive cells were identified (i.e., 7 of 204 neurons). All remaining neurons mapped in area 5 were highly responsive to joint position, movement, and/or touch. These anatomical and physiological data demonstrate that area LIP is a unique visual area in posterior parietal cortex, with histological, anatomical, and physiological properties different from other areas in the inferior parietal lobule. Analysis of feedforward and feedback projections suggests that area LIP occupies a high position in the overall hierarchy of extrastriate visual processing areas in the macaque brain.
Article
To identify the cortical connections of the medial superior temporal (MST) and fundus of the superior temporal (FST) visual areas in the extrastriate cortex of the macaque, we injected multiple tracers, both anterograde and retrograde, in each of seven macaques under physiological control. We found that, in addition to connections with each other, both MST and FST have widespread connections with visual and polysensory areas in posterior prestriate, parietal, temporal, and frontal cortex. In prestriate cortex, both areas have connections with area V3A. MST alone has connections with the far peripheral field representations of V1 and V2, the parieto-occipital (PO) visual area, and the dorsal prelunate area (DP), whereas FST alone has connections with area V4 and the dorsal portion of area V3. Within the caudal superior temporal sulcus, both areas have extensive connections with the middle temporal area (MT), MST alone has connections with area PP, and FST alone has connections with area V4t. In the rostral superior temporal sulcus, both areas have extensive connections with the superior temporal polysensory area (STP) in the upper bank of the sulcus and with area IPa in the sulcal floor. FST also has connections with the cortex in the lower bank of the sulcus, involving area TEa. In the parietal cortex, both the central field representation of MST and FST have connections with the ventral intraparietal (VIP) and lateral intraparietal (LIP) areas, whereas MST alone has connections with the inferior parietal gyrus. In the temporal cortex, the central field representation of MST as well as FST has connections with visual area TEO and cytoarchitectonic area TF. In the frontal cortex, both MST and FST have connections with the frontal eye field.
Article
We have examined the origin and topography of cortical projections to area PO, an extrastriate visual area located in the parieto-occipital sulcus of the macaque. Distinguishable retrograde fluorescent tracers were injected into area PO at separate retinotopic loci identified by single-neuron recording. The results indicate that area PO receives retinotopically organized inputs from visual areas V1, V2, V3, V4, and MT. In each of these areas the projection to PO arises from the representation of the periphery of the visual field. This finding is consistent with neurophysiological data indicating that the representation of the periphery is emphasized in PO. Additional projections arise from area MST, the frontal eye fields, and several divisions of parietal cortex, including four zones within the intraparietal sulcus and a region on the medial dorsal surface of the hemisphere (MDP). On the basis of the laminar distribution of labeled cells we conclude that area PO receives an ascending input from V1, V2, and V3 and receives descending or lateral inputs from all other areas. Thus, area PO is at approximately the same level in the hierarchy of visual areas as areas V4 and MT. Area PO is connected both directly and indirectly, via MT and MST, to parietal cortex. Within parietal cortex, area PO is linked to particular regions of the intraparietal sulcus including VIP and LIP and two newly recognized zones termed here MIP and PIP. The wealth of connections with parietal cortex suggests that area PO provides a relatively direct route over which information concerning the visual field periphery can be transmitted from striate and prestriate cortex to parietal cortex. In contrast, area PO has few links with areas projecting to inferior temporal cortex. The pattern of connections revealed in this study is consistent with the view that area PO is primarily involved in visuospatial functioning.
Article
Injections of HRP-WGA in four cytoarchitectonic subdivisions of the posterior parietal cortex in rhesus monkeys allowed us to examine the major limbic and sensory afferent and efferent connections of each area. Area 7a (the caudal part of the posterior parietal lobe) is reciprocally interconnected with multiple visual-related areas: the superior temporal polysensory area (STP) in the upper bank of the superior temporal sulcus (STS), visual motion areas in the upper bank of STS, the dorsal prelunate gyrus, and portions of V2 and the parieto-occipital (PO) area. Area 7a is also heavily interconnected with limbic areas: the ventral posterior cingulate cortex, agranular retrosplenial cortex, caudomedial lobule, the parahippocampal gyrus, and the presubiculum. By contrast, the adjacent subdivision, area 7ip (within the posterior bank of the intraparietal sulcus), has few limbic connections but projects to and receives projections from widespread visual areas different than those that are connected with area 7a: the ventral bank and fundus of the STS including part of the STP cortex and the inferotemporal cortex (IT), areas MT (middle temporal) and possibly MTp (MT peripheral) and FST (fundal superior temporal) and portions of V2, V3v, V3d, V3A, V4, PO, and the inferior temporal (IT) convexity cortex. The connections between posterior parietal areas and visual areas located on the medial surface of the occipital and parieto-occipital cortex, containing peripheral representations of the visual field (V2, V3, PO), represent a major previously unrecognized source of visual inputs to the parietal association cortex. Area 7b (the rostral part of the posterior parietal lobe) was distinctive among parietal areas in its selective association with somatosensory-related areas: S1, S2, 5, the vestibular cortex, the insular cortex, and the supplementary somatosensory area (SSA). Like 7ip, area 7b had few limbic associations. Area 7m (on the medial posterior parietal cortex) has its own topographically distinct connections with the limbic (the posterior ventral bank of the cingulate sulcus, granular retrosplenial cortex, and presubiculum), visual (V2, PO, and the visual motion cortex in the upper bank of the STS), and somatosensory (SSA, and area 5) cortical areas. Each parietal subdivision is extensively interconnected with areas of the contralateral hemisphere, including both the homotopic cortex and widespread heterotopic areas. Indeed, each area is interconnected with as many areas of the contralateral hemisphere as it is within the ipsilateral one, though less intensively. This pattern of distribution allows for a remarkable degree of interhemispheric integration. These findings provide evidence that each major subdivision of posterior parietal cortex has a unique set of reciprocal connections with limbic and sensory areas in both hemispheres. For the most part, each parietal subdivision, rather than being a site of multimodal convergence, receives input from only one sensory modality, though often from different channels of information within that modality. For example, the two streams of visual information traditionally linked to pattern and motion seem to converge in both areas 7a and 7ip. The areal parcellation of parietal cortex by its afferent and efferent connections provides an anatomical foundation for a parallel processing model of higher cortical functions.
Article
Parietotectal projections were studied in the macaque monkey in experiments designed to compare the distribution of fibers originating in two cytoarchitecturally distinct regions within the inferior parietal lobule: the inferior bank of the intraparietal sulcus (area POa of Seltzer and Pandya, '80) and the adjoining part of area PG (von Bonin and Bailey, '47) on the convexity of the hemisphere, here called PGc. A dense fiber projection from POa to the intermediate and deep layers of the superior colliculus was observed by both anterograde autoradiographic and anterograde horseradish peroxidase methods. In contrast, only faint labeling was seen in the superior colliculus following injections of tritiated amino acids into area PGc on the convexity. In a second set of experiments, injections of horseradish peroxidase were placed in the intermediate and deep layers of the superior colliculus so that the cells of origin of the parietotectal projections could be identified. Many retrogradely labeled neurons were observed in POa, whereas very few labeled neurons were present in any other subdivision of the inferior parietal lobule or in the superior parietal lobule. These findings demonstrate that area POa has a prominent direct efferent projection to a major premotor region of the brainstem oculomotor system and suggest that by virtue of its parietotectal connection, this sulcal subdivision may have functional properties not shared with other subdivisions of the inferior parietal lobule.
Article
The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and response- and computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.
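Handedness inventories of this kind are usually summarised as a laterality quotient; the snippet below uses the widely quoted convention LQ = 100 × (R − L) / (R + L) on made-up item scores (a generic sketch; the paper's own response and computational conventions should be taken from the original).

def laterality_quotient(right_scores, left_scores):
    """Laterality quotient from per-item right/left preference scores.

    Uses the common convention LQ = 100 * (R - L) / (R + L):
    +100 = exclusively right-handed, -100 = exclusively left-handed.
    """
    r, l = sum(right_scores), sum(left_scores)
    if r + l == 0:
        raise ValueError("no responses recorded")
    return 100.0 * (r - l) / (r + l)

# Hypothetical 10-item responses (number of '+' marks per hand and item).
right = [2, 2, 1, 2, 2, 1, 2, 2, 1, 2]
left  = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
print(f"LQ = {laterality_quotient(right, left):.1f}")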
Article
By means of autoradiographic and ablation-degeneration techniques, the intrinsic cortical connections of the posterior parietal cortex in the rhesus monkey were traced and correlated with a reappraisal of cerebral architectonics. Two major rostral-to-caudal connectional sequences exist. One begins in the dorsal postcentral gyrus (area 2) and proceeds, through architectonic divisions of the superior parietal lobule (areas PE and PEc), to a cortical region on the medial surface of the parietal lobe (area PGm). This area has architectonic features similar to those of the caudal inferior parietal lobule (area PG). The second sequence begins in the ventral postcentral gyrus (area 2) and passes through the rostral inferior parietal lobule (areas PG and PFG) to reach the caudal inferior parietal lobule (area PG). Both the superior parietal lobule and the rostral inferior parietal lobule also send projections to various other zones located in the parietal opercular region, the intraparietal sulcus, and the caudalmost portion of the cingulate sulcus. Areas PGm and PG, on the other hand, project to each other, to the cingulate region, to the caudalmost portion of the superior temporal gyrus, and to the upper bank of the superior temporal sulcus. Finally, a reciprocal sequence of connections, directed from caudal to rostral, links together many of the above-mentioned parietal zones. With regard to the laminar pattern of termination, the rostral-to-caudal connections are primarily distributed in the form of cortical "columns" while the caudal-to-rostral connections are found mainly over the first cortical cell layer.
Article
It is a familiar experience that we tend to close our eyes or divert our gaze when concentrating attention on cognitively demanding tasks. We report on the brain activity correlates of directing attention away from potentially competing visual processing and toward processing in another sensory modality. Results are reported from a series of positron-emission tomography studies of the human brain engaged in somatosensory tasks, in both "eyes open" and "eyes closed" conditions. During these tasks, there was a significant decrease in the regional cerebral blood flow in the visual cortex, which occurred irrespective of whether subjects had to close their eyes or were instructed to keep their eyes open. These task-related deactivations of the association areas belonging to the nonrelevant sensory modality were interpreted as being due to decreased metabolic activity. Previous research has clearly demonstrated selective activation of cortical regions involved in attention-demanding modality-specific tasks; however, the other side of this story appears to be one of selective deactivation of unattended areas.
Article
1. The middle temporal area (MT) projects to the intraparietal sulcus in the macaque monkey. We describe here a discrete area in the depths of the intraparietal sulcus containing neurons with response properties similar to those reported for area MT. We call this area the physiologically defined ventral intraparietal area, or VIP. In the present study we recorded from single neurons in VIP of alert monkeys and studied their visual and oculomotor response properties. 2. Area VIP has a high degree of selectivity for the direction of a moving stimulus. In our sample 72/88 (80%) neurons responded at least twice as well to a stimulus moving in the preferred direction compared with a stimulus moving in the null direction. The average response to stimuli moving in the preferred direction was 9.5 times as strong as the response to stimuli moving in the opposite direction, as compared with 10.9 times as strong for neurons in area MT. 3. Many neurons were also selective for speed of stimulus motion. Quantitative data from 25 neurons indicated that the distribution of preferred speeds ranged from 10 to 320 degrees/s. The degree of speed tuning was on average twice as broad as that reported for area MT. 4. Some neurons (22/41) were selective for the distance at which a stimulus was presented, preferring a stimulus of equivalent visual angle and luminance presented near (within 20 cm) or very near (within 5 cm) the face. These neurons maintained their preference for near stimuli when tested monocularly, suggesting that visual cues other than disparity can support this response. These neurons typically could not be driven by small spots presented on the tangent screen (at 57 cm). 5. Some VIP neurons responded best to a stimulus moving toward the animal. The absolute direction of visual motion was not as important for these cells as the trajectory of the stimulus: the best stimulus was one moving toward a particular point on the face from any direction. 6. VIP neurons were not active in relation to saccadic eye movements. Some neurons (10/17) were active during smooth pursuit of a small target. 7. The predominance of direction and speed selectivity in area VIP suggests that it, like other visual areas in the dorsal stream, may be involved in the analysis of visual motion.
Article
1. Although a representation of multisensory space is contained in the superior colliculus, little is known about the spatial requirements of multisensory stimuli that influence the activity of neurons here. Critical to this problem is an assessment of the registry of the different receptive fields within individual multisensory neurons. The present study was initiated to determine how closely the receptive fields of individual multisensory neurons are aligned, the physiological role of that alignment, and the possible functional consequences of inducing receptive-field misalignment. 2. Individual multisensory neurons in the superior colliculus of anesthetized, paralyzed cats were studied with the use of standard extracellular recording techniques. The receptive fields of multisensory neurons were large, as reported previously, but exhibited a surprisingly high degree of spatial coincidence. The average proportion of receptive-field overlap was 86% for the population of visual-auditory neurons sampled. 3. Because of this high degree of intersensory receptive-field correspondence, combined-modality stimuli that were coincident in space tended to fall within the excitatory regions of the receptive fields involved. The result was a significantly enhanced neuronal response in 88% of the multisensory neurons studied. If stimuli were spatially disparate, so that one fell outside its receptive field, either a decreased response occurred (56%), or no intersensory effect was apparent (44%). 4. The normal alignment of the different receptive fields of a multisensory neuron could be disrupted by passively displacing the eyes, pinnae, or limbs/body. In no case was a shift in location or size observed in a neuron's other receptive field(s) to compensate for this displacement. The physiological result of receptive-field misalignment was predictable and based on the location of the stimuli relative to the new positions of their respective receptive fields. Now, for example, one component of a spatially coincident pair of stimuli might fall outside its receptive field and inhibit the other's effects. 5. These data underscore the dependence of multisensory integrative responses on the relationship of the different stimuli to their corresponding receptive fields rather than to the spatial relationship of the stimuli to one another. Apparently, the alignment of different receptive fields for individual multisensory neurons ensures that responses to combinations of stimuli derived from the same event are integrated to increase the salience of that event. Therefore the maintenance of receptive-field alignment is critical for the appropriate integration of converging sensory signals and, ultimately, elicitation of adaptive behaviors.
Article
1. The properties of visual-, auditory-, and somatosensory-responsive neurons, as well as of neurons responsive to multiple sensory cues (i.e., multisensory), were examined in the superior colliculus of the rhesus monkey. Although superficial layer neurons responded exclusively to visual stimuli and visual inputs predominated in deeper layers, there was also a rich nonvisual and multisensory representation in the superior colliculus. More than a quarter (27.8%) of the deep layer population responded to stimuli from more than a single sensory modality. In contrast, 37% responded only to visual cues, 17.6% to auditory cues, and 17.6% to somatosensory cues. Unimodal- and multisensory-responsive neurons were clustered by modality. Each of these modalities was represented in map-like fashion, and the different representations were in alignment with one another. 2. Most deep layer visually responsive neurons were binocular and exhibited poor selectivity for such stimulus characteristics as orientation, velocity, and direction of movement. Similarly, most auditory-responsive neurons had contralateral receptive fields and were binaural, but had little frequency selectivity and preferred complex, broad-band sounds. Somatosensory-responsive neurons were overwhelmingly contralateral, high velocity, and rapidly adapting. Only rarely did somatosensory-responsive neurons require distortion of subcutaneous tissue for activation. 3. The spatial congruence among the different receptive fields of multisensory neurons was a critical feature underlying their ability to synthesize cross-modal information. 4. Combinations of stimuli could have very different consequences in the same neuron, depending on their temporal and spatial relationships. Generally, multisensory interactions were evident when pairs of stimuli were separated from one another by < 500 ms, and the products of these interactions far exceeded the sum of their unimodal components. Whether the combination of stimuli produced response enhancement, response depression, or no interaction depended on the location of the stimuli relative to one another and to their respective receptive fields. Maximal response enhancements were observed when stimuli originated from similar locations in space (as when derived from the same event) because they fell within the excitatory receptive fields of the same multisensory neurons. If, however, the stimuli were spatially disparate such that one fell beyond the excitatory borders of its receptive field, either no interaction was produced or this stimulus depressed the effectiveness of the other. Furthermore, maximal response interactions were seen with the pairing of weakly effective unimodal stimuli. As the individual unimodal stimuli became increasingly effective, the levels of response enhancement to stimulus combinations declined, a principle referred to as inverse effectiveness. Many of the integrative principles seen here in the primate superior colliculus are strikingly similar to those observed in the cat. These observations indicate that a set of common principles of multisensory integration is adaptable in widely divergent species living in very different ecological situations. 5. Surprisingly, a few multisensory neurons had individual receptive fields that were not in register with one another. 
This has not been noted in multisensory neurons of other species, and these "anomalous" receptive fields could present a daunting problem: stimuli originating from the same general location in space cannot simultaneously fall within their respective receptive fields, a stimulus pairing that may result in response depression. Conversely, stimuli that originate from separate events and disparate locations (and fall within their receptive fields) may result in response enhancement. However, the spatial principle of multisensory integration did not apply in these cases.
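The response enhancements and the inverse-effectiveness principle described above are commonly quantified with a simple index, 100 × (CM − SMmax) / SMmax, where CM is the response to the combined stimulus and SMmax the best unimodal response; the sketch below applies that standard formula to hypothetical firing rates (the numbers are invented, not data from the study).

def multisensory_enhancement(combined, best_unimodal):
    """Percent enhancement of the combined-stimulus response over the
    best unimodal response: 100 * (CM - SMmax) / SMmax."""
    if best_unimodal <= 0:
        raise ValueError("best unimodal response must be positive")
    return 100.0 * (combined - best_unimodal) / best_unimodal

# Hypothetical mean spike counts per trial for one collicular neuron.
visual_only, auditory_only = 2.0, 1.5
combined = 7.0
best = max(visual_only, auditory_only)
print(f"enhancement = {multisensory_enhancement(combined, best):.0f}%")
# Inverse effectiveness: weaker unimodal responses tend to yield larger
# proportional enhancement (e.g. 1.0 -> 4.0 is +300%, 10 -> 14 is only +40%).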
Article
The cortical connections of visual area 3 (V3) and the ventral posterior area (VP) in the macaque monkey were studied by using combinations of retrograde and anterograde tracers. Tracer injections were made into V3 or VP following electrophysiological recording in and near the target area. The pattern of ipsilateral cortical connections was analyzed in relation to the pattern of interhemispheric connections identified after transection of the corpus callosum. Both V3 and VP have major connections with areas V2, V3A, posterior intraparietal area (PIP), V4, middle temporal area (MT), medial superior temporal area (dorsal) (MSTd), and ventral intraparietal area (VIP). Their connections differ in several respects. Specifically, V3 has connections with areas V1 and V4 transitional area (V4t) that are absent for VP; VP has connections with areas ventral occipitotemporal area (VOT), dorsal prelunate area (DP), and visually responsive portion of temporal visual area F (VTF) that are absent or occur only rarely for V3. The laminar pattern of labelled terminals and retrogradely labeled cell bodies allowed assessment of the hierarchical relationships between areas V3 and VP and their various targets. Areas V1 and V2 are at a lower level than V3 and VP; all of the remaining areas are at a higher level. V3 receives major inputs from layer 4B of V1, suggesting an association with the magnocellular-dominated processing stream and a role in routing magnocellular-dominated information along pathways leading to both parietal and temporal lobes. The convergence and divergence of pathways involving V3 and VP underscores the distributed nature of hierarchical processing in the visual system.
Article
This paper is about detecting activations in statistical parametric maps and considers the relative sensitivity of a nested hierarchy of tests that we have framed in terms of the level of inference (voxel level, cluster level, and set level). These tests are based on the probability of obtaining c, or more, clusters with k, or more, voxels, above a threshold u. This probability has a reasonably simple form and is derived using distributional approximations from the theory of Gaussian fields. The most important contribution of this work is the notion of set-level inference. Set-level inference refers to the statistical inference that the number of clusters comprising an observed activation profile is highly unlikely to have occurred by chance. This inference pertains to the set of activations reaching criteria and represents a new way of assigning P values to distributed effects. Cluster-level inferences are a special case of set-level inferences, which obtain when the number of clusters c = 1. Similarly, voxel-level inferences are special cases of cluster-level inferences that result when the cluster can be very small (i.e., k = 0). Using a theoretical power analysis of distributed activations, we observed that set-level inferences are generally more powerful than cluster-level inferences and that cluster-level inferences are generally more powerful than voxel-level inferences. The price paid for this increased sensitivity is reduced localizing power: Voxel-level tests permit individual voxels to be identified as significant, whereas cluster- and set-level inferences only allow clusters or sets of clusters to be so identified. For all levels of inference the spatial size of the underlying signal f (relative to resolution) determines the most powerful thresholds to adopt. For set-level inferences if f is large (e.g., fMRI) then the optimum extent threshold should be greater than the expected number of voxels for each cluster. If f is small (e.g., PET) the extent threshold should be small. We envisage that set-level inferences will find a role in making statistical inferences about distributed activations, particularly in fMRI.
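In outline, the probability referred to above, of observing c or more clusters each with k or more suprathreshold voxels, can be written as a Poisson tail probability; the following is only a hedged sketch of that form, with E_c standing for the expected number of qualifying clusters under the Gaussian-field approximation, not the paper's exact expression.

% Sketch only: the observed number of clusters of extent >= k above
% threshold u is treated as Poisson with expectation E_c.
\[
  P(C \ge c \mid k, u) \;\approx\; 1 - \sum_{i=0}^{c-1} \frac{E_c^{\,i}\, e^{-E_c}}{i!},
  \qquad
  E_c = \mathbb{E}[\text{number of clusters above } u] \cdot P(\text{cluster extent} \ge k).
\]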
Article
In a previous report, we described the visual response properties in the ventral intraparietal area (area VIP) of the awake macaque. Here we describe the somatosensory response properties in area VIP and the patterns of correspondence between the responses of single neurons to independently administered tactile and visual stimulation. VIP neurons responded to visual stimulation only or to visual and tactile stimulation. Of 218 neurons tested, 153 (70%) were bimodal in the sense that they responded to stimuli that were independently applied in either sensory modality. Unimodal visual and bimodal neurons were intermingled within the recording area and could not be distinguished on the basis of their visual response properties alone. Most of the cells with a tactile receptive field (RF) responded well to light touch or air puffs. The distribution of RF locations principally emphasized the head (85%), with approximately equivalent representations of the upper and lower face areas. The tactile and visual RFs were aligned in a congruent manner, with the intersection of the visual vertical and horizontal meridian having its tactile counterpart in the nose/mouth area. Small foveal visual RFs were paired with small tactile RFs on the muzzle, and peripheral visual RFs were associated with tactile RFs on the side of the head or body. Most cells showed a strong sensitivity to moving stimuli, and the preferred directions of visual and tactile motion coincided in 85% of bimodal cells. In some cases, bimodal response patterns were complementary: cells responding to motion in depth toward the monkey had ON responses, whereas cells responding to motion in depth away from the monkey had OFF responses. Other forms of bimodal response congruence included orientation selectivity, and ON, OFF, and ON/OFF response types. The large proportion of bimodal tactile and visual neurons with congruent response properties in area VIP indicates that there are important functional differences between area VIP and other dorsal stream areas involved in the analysis of motion. We suggest that VIP is involved in the construction of a multisensory, head-centered representation of near extrapersonal space.
Article
Positron emission tomography was used to compare the functional anatomy of visual imagination and generation of movement. Subjects were asked to generate visual images of their finger movement in response to a preparatory signal. Four conditions were tested: in two, no actual movement was required; in the other two, a second signal prompted the subjects to execute the imagined movement. Which movement to imagine was either specified by the preparatory stimulus or freely selected by the subjects. Compared with a rest condition, tasks involving only imagination activated several cortical regions (inferoparietal cortex, presupplementary motor area, anterior cingulate cortex, premotor cortex, dorsolateral prefrontal cortex) contralateral to the imagined movement. Tasks involving both imagination and movement additionally increased activity in the ipsilateral cerebellum, thalamus, and contralateral anteroparietal and motor cortices, and decreased activity in the inferior frontal cortex. These results support the hypothesis that distinct functional systems are involved in visuomotor imagination and the generation of simple finger movements: associative parietofrontal areas are primarily related to visuomotor imagination, the inferior frontal cortex is likely engaged in active motor suppression, and primary motor structures contribute mainly to movement execution.
Article
A series of recent anatomical and functional data has radically changed our view on the organization of the motor cortex in primates. In the present article we outline this view and discuss its fundamental principles. The basic principles are the following: (a) the motor cortex, defined as the agranular frontal cortex, is formed by a mosaic of separate areas, each of which contains an independent body movement representation; (b) each motor area plays a specific role in motor control, based on the specificity of its cortical afferents and descending projections; (c) in analogy to the motor cortex, the posterior parietal cortex is formed by a multiplicity of areas, each of which is involved in the analysis of particular aspects of sensory information (there are no such things as multipurpose areas for space or body schema); and (d) the parieto-frontal connections form a series of segregated anatomical circuits devoted to specific sensorimotor transformations. These circuits transform sensory information into action. They represent the basic functional units of the motor system. Although these conclusions mostly derive from monkey experiments, anatomical and brain-imaging evidence suggests that the organization of human motor cortex is based on the same principles. Possible homologies between the motor cortices of humans and non-human primates are discussed.
Article
This report addresses the connectivity of the cortex occupying middle to dorsal levels of the anterior bank of the parieto-occipital sulcus in the macaque monkey. We have previously referred to this territory, whose perimeter is roughly circumscribed by the distribution of interhemispheric callosal fibres, as area V6, or the 'V6 complex'. Following injections of wheatgerm agglutinin conjugated to horseradish peroxidase (WGA-HRP) into this region, we examined the laminar organization of labelled cells and axonal terminals to attain indications of relative hierarchical status among the network of connected areas. A notable transition in the laminar patterns of the local, intrinsic connections prompted a sub-designation of the V6 complex itself into two separate areas, V6 and V6A, with area V6A lying dorsal, or dorsomedial, to V6 proper. V6 receives ascending input from V2 and V3, ranks equal to V3A and V5, and provides an ascending input to V6A at the level above. V6A is not connected to area V2 and in general is less heavily linked to the earliest visual areas; in other respects, the two parts of the V6 complex share similar spheres of connectivity. These include regions of peripheral representation in prestriate areas V3, V3A and V5, parietal visual areas V5A/MST and 7a, other regions of visuo-somatosensory association cortex within the intraparietal sulcus and on the medial surface of the hemisphere, and the premotor cortex. Subcortical connections include the medial and lateral pulvinar, caudate nucleus, claustrum, middle and deep layers of the superior colliculus and pontine nuclei.
Article
The ability to integrate information from different sensory systems is a fundamental characteristic of the brain. Because different bits of information are derived from different sensory channels, their synthesis markedly enhances the detection and identification of external stimuli. The neural substrate for such "multisensory" integration is provided by neurons that receive convergent input from two or more sensory modalities. Many such multisensory neurons are found in the superior colliculus (SC), a midbrain structure that plays a significant role in overt attentive and orientation behaviors. The various principles governing the integration of visual, auditory, and somatosensory inputs in SC neurons have been explored in several species. Thus far, the evidence suggests a remarkable conservation of integrative features during vertebrate evolution. One of the most robust of these principles is based on spatial relationships: a striking enhancement in activity is induced in a multisensory neuron when two different sensory stimuli (e.g., visual and auditory) are in spatial concordance, whereas a profound response depression can be induced when these cues are spatially discordant. The most extensive physiological observations have been made in cat, and in this species the same principles that have been shown to govern multisensory integration at the level of the individual SC neuron have also been shown to govern overt attentive and orientation responses to multisensory stimuli. Most surprising, however, is the critical role played by association (i.e. anterior ectosylvian) cortex in facilitating these midbrain processes. In the absence of the modulating corticotectal influences, multisensory SC neurons in cat are unable to integrate the different sensory cues converging upon them in an adult-like fashion, and are unable to mediate overt multisensory behaviors. This situation appears quite similar to that observed during early postnatal life. When multisensory SC neurons first appear, they are able to respond to multiple sensory inputs but are unable to synthesize these inputs to significantly enhance or degrade their responses. During ontogeny, individual multisensory neurons develop this capacity abruptly, but at very different ages, until the mature population condition is reached after several postnatal months. It appears likely that the abrupt onset of this capacity in any individual SC neuron reflects the maturation of inputs from anterior ectosylvian cortex. Presumably, the functional coupling of cortex with an individual SC neuron is essential to initiate and maintain that neuron's capability for multisensory integration throughout its life.
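The spatial principle summarized above is often quantified in this literature with an interactive index that expresses the combined-modality response relative to the best unisensory response. The index is not given in the abstract, and the spike counts in the sketch below are purely hypothetical.

```python
def interactive_index(multisensory, best_unisensory):
    """Percent enhancement (positive) or depression (negative) of a
    multisensory response relative to the best unisensory response."""
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# Hypothetical mean spike counts for one SC neuron (illustrative only).
visual_alone, auditory_alone = 6.0, 4.0
concordant_pair = 15.0   # visual and auditory stimuli in spatial register
discordant_pair = 2.0    # spatially discordant visual-auditory pair

best = max(visual_alone, auditory_alone)
print(interactive_index(concordant_pair, best))   # +150.0  -> response enhancement
print(interactive_index(discordant_pair, best))   # about -66.7 -> response depression
```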
Article
Superior area 6 of the macaque monkey frontal cortex is formed by two cytoarchitectonic areas: F2 and F7. In the present experiment, we studied the input from the superior parietal lobule (SPL) to these areas by injecting retrograde neural tracers into restricted parts of F2 and F7. Additional injections of retrograde tracers were made into the spinal cord to define the origin of corticospinal projections from the SPL. The results are as follows: 1) The part of F2 located around the superior precentral dimple (F2 dimple region) receives its main input from areas PEc and PEip (PE intraparietal, the rostral part of area PEa of Pandya and Seltzer, [1982] J. Comp. Neurol. 204:196-210). Area PEip was defined as that part of area PEa that is the source of corticospinal projections. 2) The ventrorostral part of F2 is the target of strong projections from the medial intraparietal area (area MIP) and from the dorsal part of the anterior wall of the parietooccipital sulcus (area V6A). 3) The ventral and caudal parts of F7 receive their main parietal input from the cytoarchitectonic area PGm of the SPL and from the posterior cingulate cortex. 4) The dorsorostral part of F7, which is also known as the supplementary eye field, is not a target of the SPL, but it receives mostly afferents from the inferior parietal lobule and from the temporal cortex. It is concluded that at least three separate parietofrontal circuits link the superior parietal lobule with the superior area 6. Considering the functional properties of the areas that form these circuits, it is proposed that the PEc/PEip-F2 dimple region circuit is involved in controlling movements on the basis of somatosensory information, which is the traditional role proposed for the whole dorsal premotor cortex. The two remaining circuits appear to be involved in different aspects of visuomotor transformations.
Article
In fMRI there are two classes of inference: one concerns the "typical" characteristics of a population, the other its "average" characteristics. The first pertains to studies of normal subjects that try to identify some qualitative aspect of normal functional anatomy. The second class necessarily applies to clinical neuroscience studies that want to make an inference about quantitative differences of a regionally specific nature. The first class of inferences is adequately serviced by conjunction analyses and fixed-effects models with relatively small numbers of subjects. The second requires random-effects analyses and larger cohorts.
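As a rough illustration of why the two classes of inference place different demands on the data, the sketch below contrasts a fixed-effects error term (within-subject variance only) with a random-effects one (between-subject variance, via a one-sample t-test) at a single voxel. The contrast values, variances, and use of numpy/scipy are assumptions for illustration, not an account of the analyses discussed in the paper.

```python
import numpy as np
from scipy import stats

# Per-subject contrast estimates at one voxel and their within-subject
# variances -- illustrative numbers only.
contrasts = np.array([0.8, 1.1, 0.3, 0.9, 1.4, 0.7])
within_var = np.array([0.04, 0.05, 0.04, 0.06, 0.05, 0.04])

# Fixed effects: only within-subject (scan-to-scan) variance enters the
# error term, so the inference is about the subjects actually studied.
w = 1.0 / within_var
fe_mean = np.sum(w * contrasts) / np.sum(w)
fe_se = np.sqrt(1.0 / np.sum(w))
z_fixed = fe_mean / fe_se
p_fixed = stats.norm.sf(z_fixed)

# Random effects: between-subject variability enters the error term via a
# one-sample t-test on the per-subject estimates, so the inference
# generalizes to the population the subjects were drawn from.
t_random, p_two_sided = stats.ttest_1samp(contrasts, 0.0)
p_random = p_two_sided / 2.0   # one-sided, assuming a positive effect

print(f"fixed effects:  z = {z_fixed:.2f}, one-sided p = {p_fixed:.4f}")
print(f"random effects: t = {t_random:.2f}, one-sided p = {p_random:.4f}")
```

With six subjects the fixed-effects test is far more liberal, which is why qualitative "typical" conclusions can rest on small cohorts while quantitative population-average claims require random-effects analyses and more subjects.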
Article
A 73-year-old man showed visual and tactile agnosia following bilateral haemorrhagic stroke. Tactile agnosia was present in both hands, as shown by his impaired recognition of objects, geometrical shapes, letters and nonsense shapes. Basic somatosensory functions and the appreciation of substance qualities (hylognosis) were preserved. The patient's inability to identify the stimulus shape (morphagnosia) was associated with a striking impairment in detecting the orientation of a line or a rod in two- and three-dimensional space. This spatial deficit was thought to underlie morphagnosia, since in the tactile modality form recognition is built upon the integration of the successive changes of orientation in space made by the hand as it explores the stimulus. Indirect support for this hypothesis was provided by the location of the lesions, which could not account for the severe impairment of both hands. Only those located in the right hemisphere encroached upon the posterior parietal cortex, which is the region assumed to be specialised in shape recognition. The left hemisphere damage spared the corresponding area and could not, therefore, be held responsible for the right-hand tactile agnosia. We submit that tactile agnosia can result from the disruption of two discrete mechanisms and presents with different features in each case. It may arise from a parietal lesion damaging the high-level processing of somatosensory information that culminates in the structured description of the object. In this case, tactile recognition is impaired in the hand contralateral to the side of the lesion. Alternatively, it may be caused by a profound derangement of spatial skills, particularly those involved in detecting the orientation in space of lines, segments and complex patterns. This deficit results in morphagnosia, which affects both hands to the same degree.