Article

Unitary haptic perception: Integrating moving tactile inputs from anatomically adjacent and non-adjacent digits


Abstract

How do we achieve unitary perception of an object when it touches two parts of the sensory epithelium that are not contiguous? We investigated this problem with a simple psychophysical task, which we then used in an fMRI experiment. Two wooden rods were moved over two digits positioned to be spatially adjacent. The digits were either from one foot (or hand) or one digit was from each foot (or hand). When the rods were moving in phase, one object was reliably perceived. By contrast, when the rods were moving out of phase, two objects were reliably perceived. fMRI revealed four cortical areas where activity was higher when the moving rods were perceived as one object relative to when they were perceived as two separate objects. Areas in the right inferior parietal lobule, the left inferior temporal sulcus and the left middle frontal gyrus were activated for this contrast regardless of the anatomical configuration of the stimulated sensory epithelia. By contrast, the left intraparietal sulcus was activated specifically when integration across the midline was required, irrespective of whether the stimulation was applied to the hands or feet. These results reveal a network of brain areas involved in generating a unified percept of the presence of an object that comes into contact with different parts of the body surface.
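The one-object versus two-object manipulation comes down to the relative phase of the two rod trajectories. A minimal sketch of that stimulus logic follows (the amplitude, frequency, and sampling values are hypothetical illustration, not the study's actual motion parameters):

```python
import numpy as np

# Hypothetical stimulus parameters; the study's actual values may differ.
AMPLITUDE_MM = 20.0   # excursion of each rod
FREQ_HZ = 0.5         # oscillation frequency
DURATION_S = 8.0
SAMPLE_HZ = 100.0

def rod_trajectories(phase_offset_rad):
    """Positions of the two rods over time for a given relative phase.

    phase_offset_rad = 0      -> in phase  (reliably perceived as one object)
    phase_offset_rad = np.pi  -> out of phase (reliably perceived as two objects)
    """
    t = np.arange(0.0, DURATION_S, 1.0 / SAMPLE_HZ)
    rod1 = AMPLITUDE_MM * np.sin(2 * np.pi * FREQ_HZ * t)
    rod2 = AMPLITUDE_MM * np.sin(2 * np.pi * FREQ_HZ * t + phase_offset_rad)
    return t, rod1, rod2

t, in1, in2 = rod_trajectories(0.0)      # "one object" condition
t, out1, out2 = rod_trajectories(np.pi)  # "two objects" condition
```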


... The critical difference between periodic and non-periodic surfaces is that dots of periodic surfaces constitute lines orthogonal to the scan direction, and each line of dots stimulates the fingertip simultaneously. Dots that simultaneously stimulate the finger can be perceptually grouped together 39,40. Simultaneous stimulation at separate points on the skin can be perceived as a single, more intense stimulus at the central location (tactile funnelling illusion) 41. ...
... For instance, Wacker et al. (2011) observed that patterned stimulation produced by dot arrays on the finger activates the posterior parietal lobule and the postcentral gyrus more than randomized stimulation using the same dot arrays 17. In-phase object motion across two adjacent fingers activates the posterior parietal lobule to a greater extent than out-of-phase motion across the same fingers 39,40. Accordingly, it is possible that the sensory inputs of simultaneous dot stimulation are grouped together in the posterior parietal lobule as well as in the somatosensory cortices. ...
Article
Full-text available
Humans are able to judge the speed of an object's motion by touch. Research has suggested that tactile judgment of speed is influenced by physical properties of the moving object, though the neural mechanisms underlying this process remain poorly understood. In the present study, functional magnetic resonance imaging was used to investigate brain networks that may be involved in tactile speed classification and how such networks may be affected by an object's texture. Participants were asked to classify the speed of 2-D raised dot patterns passing under their right middle finger. Activity in the parietal operculum, insula, and inferior and superior frontal gyri was positively related to the motion speed of dot patterns. Activity in the postcentral gyrus and superior parietal lobule was sensitive to dot periodicity. Psycho-physiological interaction (PPI) analysis revealed that dot periodicity modulated functional connectivity between the parietal operculum (related to speed) and postcentral gyrus (related to dot periodicity). These results suggest that texture-sensitive activity in the primary somatosensory cortex and superior parietal lobule influences brain networks associated with tactually-extracted motion speed. Such effects may be related to the influence of surface texture on tactile speed judgment.
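For readers unfamiliar with PPI: the interaction regressor is, in essence, the elementwise product of the (centered) seed-region timecourse and the psychological (task) regressor, entered into a GLM alongside both main effects. Below is a minimal sketch with synthetic data (the variable names and the simple two-level task coding are assumptions; published PPI pipelines typically also deconvolve the seed signal to the neural level before forming the product):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

seed = rng.standard_normal(n_scans)          # parietal operculum timecourse (synthetic)
task = np.repeat([1.0, -1.0], n_scans // 2)  # periodic vs. non-periodic dots, centered coding
target = rng.standard_normal(n_scans)        # postcentral gyrus timecourse (synthetic)

ppi = (seed - seed.mean()) * task            # psycho-physiological interaction term

# GLM: target ~ intercept + seed + task + interaction
X = np.column_stack([np.ones(n_scans), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("interaction beta:", beta[3])          # reliably nonzero -> context-dependent coupling
```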
... Our findings show for the first time haptic grouping by similarity in roughness. This is in line with earlier results that grouping by similarity is operational in the haptic modality: grouping effects have been shown for similarity of line orientation (Overvliet et al. 2012), similarity in movement direction (Peelen et al. 2010) and similarity in textures (Chang et al. 2007b). In general, grouping principles seem to apply to the haptic modality as they do in the auditory and visual modalities; for example, movability and height increase the strength of a single object percept (Pawluk et al. 2010), and grouping also seems to aid haptic enumeration (Verlaers et al. 2015). An overview of many more studies that are (post hoc interpreted to be) influenced by perceptual grouping can be found in Spence (2011, 2014). ...
Article
Full-text available
We investigated grouping by similarity of surface roughness in the context of task difficulty. We hypothesized that grouping yields a larger benefit at higher levels of task complexity, because efficient processing is more helpful when more cognitive resources are needed to execute a task. Participants searched for a patch of a different roughness as compared to the distractors in two strips of similar or dissimilar roughness values. We reasoned that if the distractors could be grouped based on similar roughness values, exploration time would be shorter and fewer errors would occur. To manipulate task complexity, we varied task difficulty (high target saliency equalling low task difficulty), and we varied the fingers used to explore the display (two fingers of one hand being more cognitive demanding than two fingers of opposite hands). We found much better performance in the easy condition as compared to the difficult condition (in both error rates and mean search slopes). Moreover, we found a larger effect for the similarity manipulation in the difficult condition as compared to the easy condition. Within the difficult condition, we found a larger effect for the one-hand condition as compared to the two-hand condition. These results show that haptic search is accelerated by the use of grouping by similarity of surface roughness, especially when the task is relatively complex. We conclude that the effect of perceptual grouping is more prominent when more cognitive resources are needed to perform a task.
... LOTC-tool might store such properties in a relatively abstract multimodal code, perhaps integrating information from multiple modalities (Beauchamp, Lee, Argall, & Martin, 2004). Indeed, there is evidence that regions in LOTC are involved in extracting object shape from tactile input (Peelen, Rogers, Wing, Downing, & Bracewell, 2010; Amedi, von Kriegstein, van Atteveldt, Beauchamp, & Naumer, 2005; Reed, Shoham, & Halgren, 2004; Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Amedi, Malach, Hendler, Peled, & Zohary, 2001). Furthermore, motion-selective area hMT+, located just posterior to LOTC-tool, responds to auditory motion, particularly in early blind individuals (Bedny et al., 2010; Saenz, Lewis, Huth, Fine, & Koch, 2008; Ricciardi et al., 2007; Poirier et al., 2005, 2006). ...
Article
Previous studies have provided evidence for a tool-selective region in left lateral occipitotemporal cortex (LOTC). This region responds selectively to pictures of tools and to characteristic visual tool motion. The present human fMRI study tested whether visual experience is required for the development of tool-selective responses in left LOTC. Words referring to tools, animals, and nonmanipulable objects were presented auditorily to 14 congenitally blind and 16 sighted participants. Sighted participants additionally viewed pictures of these objects. In whole-brain group analyses, sighted participants showed tool-selective activity in left LOTC in both visual and auditory tasks. Importantly, virtually identical tool-selective LOTC activity was found in the congenitally blind group performing the auditory task. Furthermore, both groups showed equally strong tool-selective activity for auditory stimuli in a tool-selective LOTC region defined by the picture-viewing task in the sighted group. Detailed analyses in individual participants showed significant tool-selective LOTC activity in 13 of 14 blind participants and 14 of 16 sighted participants. The strength and anatomical location of this activity were indistinguishable across groups. Finally, both blind and sighted groups showed significant resting state functional connectivity between left LOTC and a bilateral frontoparietal network. Together, these results indicate that tool-selective activity in left LOTC develops without ever having seen a tool or its motion. This finding puts constraints on the possible role that this region could have in tool processing and, more generally, provides new insights into the principles shaping the functional organization of OTC.
... This may explain why we found EBA, but not FBA activation during haptic perception of the human body. Indeed, during haptic exploration we can only perceive small parts of objects, faces, or body parts, while the entire shape can be fully extracted only by integrating over time the information acquired at different instants (Kitada, Kochiyama, Hashimoto, Naito, & Matsumura, 2003; Peelen, Rogers, Wing, Downing, & Bracewell, 2010). Studies on haptic object exploration have shown that object processing is feature-based in the initial stages, while a more global representation emerges only when more time is allowed for manual exploration (Lakatos & Marks, 1999). ...
Article
I reviewed recent studies on the somatosensory cortices, both neurophysiological studies in animals and studies on humans using neuroimaging techniques. The topics reviewed include 1) hierarchical information processing in the digit region of the somatosensory cortex, 2) integration of information from both sides of the body, 3) the influence of vision on somatosensory activity, and 4) pain and the somatosensory cortex. Functional implications of the new findings are discussed.
Article
Full-text available
Knowledge of object shape is primarily acquired through the visual modality but can also be acquired through other sensory modalities. In the present study, we investigated the representation of object shape in humans without visual experience. Congenitally blind and sighted participants rated the shape similarity of pairs of 33 familiar objects, referred to by their names. The resulting shape similarity matrices were highly similar for the two groups, indicating that knowledge of the objects' shapes was largely independent of visual experience. Using fMRI, we tested for brain regions that represented object shape knowledge in blind and sighted participants. Multivoxel activity patterns were established for each of the 33 aurally presented object names. Sighted participants additionally viewed pictures of these objects. Using representational similarity analysis, neural similarity matrices were related to the behavioral shape similarity matrices. Results showed that activity patterns in occipitotemporal cortex (OTC) regions, including inferior temporal (IT) cortex and functionally defined object-selective cortex (OSC), reflected the behavioral shape similarity ratings in both blind and sighted groups, also when controlling for the objects' tactile and semantic similarity. Furthermore, neural similarity matrices of IT and OSC showed similarities across blind and sighted groups (within the auditory modality) and across modality (within the sighted group), but not across both modality and group (blind auditory-sighted visual). Together, these findings provide evidence that OTC not only represents objects visually (requiring visual experience) but also represents objects nonvisually, reflecting knowledge of object shape independently of the modality through which this knowledge was acquired.
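The core representational similarity computation is simple: correlate the unique off-diagonal entries of the behavioral shape-similarity matrix with those of a neural pattern-similarity matrix. A minimal sketch with synthetic matrices follows (the 33-object count comes from the abstract; the choice of Spearman rank correlation and everything else here is an illustrative assumption, not necessarily the paper's exact pipeline):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects = 33

def random_similarity_matrix(n):
    """Symmetric matrix with unit diagonal, standing in for real similarity data."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

behavioral = random_similarity_matrix(n_objects)  # shape-similarity ratings (synthetic)
neural = random_similarity_matrix(n_objects)      # OTC pattern similarities (synthetic)

iu = np.triu_indices(n_objects, k=1)              # unique off-diagonal object pairs
rho, p = spearmanr(behavioral[iu], neural[iu])
print(f"behavior-brain RSA: rho={rho:.3f}, p={p:.3f}")
```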
Article
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47% vs. 9% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Article
Full-text available
When a group of dots within a random-dot array is discontinuously displaced, it appears as a moving region perceptually segregated from its stationary surround. The spatial, temporal and other constraints governing this effect are markedly different from those classically found for the apparent motion of isolated stimulus elements. The random-dot display appears to tap a low-level motion-detecting process, distinct from the more interpretive process elicited by the classical displays. The distinct contributions of these processes can be identified in 'multi-stable' displays which yield alternative percepts of apparent motion depending on which one or both of the processes is activated. Such experiments illustrate the interaction of relatively stimulus-constrained and relatively autonomous processes in visual perception.
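The display described is essentially a two-frame random-dot kinematogram: a random dot array in which one region is displaced coherently between frames while the surround stays put. A minimal sketch of frame generation (grid size, region, and displacement are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 64, 64
SHIFT = 2  # horizontal displacement of the target region, in dots

frame1 = rng.integers(0, 2, size=(H, W))  # random binary dot array

frame2 = frame1.copy()
region = frame1[24:40, 16:48]                    # the patch that will appear to move
frame2[24:40, 16 + SHIFT:48 + SHIFT] = region    # displace it coherently
frame2[24:40, 16:16 + SHIFT] = rng.integers(0, 2, size=(16, SHIFT))  # refill uncovered strip

# Alternated rapidly, the displaced patch segregates perceptually from the
# stationary surround as a coherently moving region.
```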
Article
Full-text available
Recent evidence that the cerebellum is involved in perception and cognition challenges the prevailing view that its primary function is fine motor control. A new alternative hypothesis is that the lateral cerebellum is not activated by the control of movement per se, but is strongly engaged during the acquisition and discrimination of sensory information. Magnetic resonance imaging of the lateral cerebellar output (dentate) nucleus during passive and active sensory tasks confirmed this hypothesis. These findings suggest that the lateral cerebellum may be active during motor, perceptual, and cognitive performances specifically because of the requirement to process sensory data.
Article
Full-text available
In a series of experimental investigations of a subject with a unilateral impairment of tactile object recognition without impaired tactile sensation, several issues were addressed. First, is tactile agnosia secondary to a general impairment of spatial cognition? On tests of spatial ability, including those directed at the same spatial integration process assumed to be taxed by tactile object recognition, the subject performed well, implying a more specific impairment of high level, modality specific tactile perception. Secondly, within the realm of high level tactile perception, is there a distinction between the ability to derive shape ('what') and spatial ('where') information? Our testing showed an impairment confined to shape perception. Thirdly, what aspects of shape perception are impaired in tactile agnosia? Our results indicate that despite accurate encoding of metric length and normal manual exploration strategies, the ability tactually to perceive objects with the impaired hand, deteriorated as the complexity of shape increased. In addition, asymmetrical performance was not found for other body surfaces (e.g. her feet). Our results suggest that tactile shape perception can be disrupted independent of general spatial ability, tactile spatial ability, manual shape exploration, or even the precise perception of metric length in the tactile modality.
Article
Full-text available
In the present investigation, we identified cortical areas involved in the integration of bimanual inputs in human somatosensory cortex. Using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), we compared the responses to unilateral versus bilateral stimulation in anterior parietal cortex and areas in the Sylvian fissure of the contralateral hemisphere. The extent of fMRI activation on the upper bank of the Sylvian fissure, in the second somatosensory (S2) and the parietal ventral (PV) areas, was significantly larger for bilateral stimulation than for unilateral stimulation. Using MEG, we were able to describe the latency of response in S1 and S2/PV to unilateral and bilateral stimulation. The MEG response had three components under both stimulus conditions: an early peak in S1 at 40 ms, a middle peak in S2/PV at 80-160 ms, and three late peaks in S2/PV at 250-420 ms. There was an increase in magnetic field strength in S2/PV to bilateral stimulation at 300-400 ms post-stimulus. The fMRI results indicate that, as in monkeys, S2/PV receives inputs from both the contralateral and ipsilateral hand. The MEG data suggest that information is processed serially from S1 to S2. The very late response in S2/PV indicates that extensive intrahemispheric processing occurs before information is transferred to the opposite hemisphere. The neural substrate for the increased activation and field strength at long latencies during bilateral stimulation can be accounted for in three ways. Under bilateral stimulus conditions, more neurons may be active, neuronal firing rate may increase, and/or neural activity may be more synchronous.
Article
Full-text available
When multiple objects are simultaneously present in a scene, the visual system must properly integrate the features associated with each object. It has been proposed that this "binding problem" is solved by selective attention to the locations of the objects [Treisman, A.M. & Gelade, G. (1980) Cogn. Psychol. 12, 97-136]. If spatial attention plays a role in feature integration, it should do so primarily when object location can serve as a binding cue. Using functional MRI (fMRI), we show that regions of the parietal cortex involved in spatial attention are more engaged in feature conjunction tasks than in single feature tasks when multiple objects are shown simultaneously at different locations but not when they are shown sequentially at the same location. These findings suggest that the spatial attention network of the parietal cortex is involved in feature binding but only when spatial information is available to resolve ambiguities about the relationships between object features.
Article
Full-text available
We investigated the contribution of the lateral intraparietal area (LIP) to the selection of saccadic eye movement targets and to saccade execution using muscimol-induced reversible inactivation and compared those effects with inactivation of the adjacent ventral intraparietal area (VIP) and with sham injections of saline into LIP. Three types of tasks were used: saccades to single visual or memorized targets, saccades to synchronous and asynchronous bilateral targets, and visual search of a target among distractors. LIP inactivation failed to produce deficits in the latency or accuracy of saccades to single targets, but it dramatically reduced the frequency of contralateral saccades in the presence of bilateral targets, and it increased search time for a contralateral target during serial visual search. In the latter task, the observed deficits might reflect either an ipsilateral bias in saccadic search strategy or an attentional impairment in locating a target among flanking distractors within the contralateral field. No effects were observed on any of these tasks after VIP inactivation. These results suggest that one important contribution of LIP to oculomotor behavior is the selection of targets for saccades in the context of competing visual stimuli.
Article
Full-text available
Many diseases involve the cerebellum and produce ataxia, which is characterized by incoordination of balance, gait, extremity and eye movements, and dysarthria. Cerebellar lesions do not always manifest with ataxic motor syndromes, however. The cerebellar cognitive affective syndrome (CCAS) includes impairments in executive, visual-spatial, and linguistic abilities, with affective disturbance ranging from emotional blunting and depression, to disinhibition and psychotic features. The cognitive and psychiatric components of the CCAS, together with the ataxic motor disability of cerebellar disorders, are conceptualized within the dysmetria of thought hypothesis. This concept holds that a universal cerebellar transform facilitates automatic modulation of behavior around a homeostatic baseline, and the behavior being modulated is determined by the specificity of anatomic subcircuits, or loops, within the cerebrocerebellar system. Damage to the cerebellar component of the distributed neural circuit subserving sensorimotor, cognitive, and emotional processing disrupts the universal cerebellar transform, leading to the universal cerebellar impairment affecting the lesioned domain. The universal cerebellar impairment manifests as ataxia when the sensorimotor cerebellum is involved and as the CCAS when pathology is in the lateral hemisphere of the posterior cerebellum (involved in cognitive processing) or in the vermis (limbic cerebellum). Cognitive and emotional disorders may accompany cerebellar diseases or be their principal clinical presentation, and this has significance for the diagnosis and management of patients with cerebellar dysfunction.
Article
Full-text available
Findings from single-cell recording studies suggest that a comparison of the outputs of different pools of selectively tuned lower-level sensory neurons may be a general mechanism by which higher-level brain regions compute perceptual decisions. For example, when monkeys must decide whether a noisy field of dots is moving upward or downward, a decision can be formed by computing the difference in responses between lower-level neurons sensitive to upward motion and those sensitive to downward motion. Here we use functional magnetic resonance imaging and a categorization task in which subjects decide whether an image presented is a face or a house to test whether a similar mechanism is also at work for more complex decisions in the human brain and, if so, where in the brain this computation might be performed. Activity within the left dorsolateral prefrontal cortex is greater during easy decisions than during difficult decisions, covaries with the difference signal between face- and house-selective regions in the ventral temporal cortex, and predicts behavioural performance in the categorization task. These findings show that even for complex object categories, the comparison of the outputs of different pools of selectively tuned neurons could be a general mechanism by which the human brain computes perceptual decisions.
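The proposed mechanism reduces to a comparison of two pooled population responses: the sign of the pooled difference gives the category, and its magnitude tracks decision difficulty. A minimal sketch follows (pool size, tuning strength, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def decide(stimulus_is_face, evidence=1.0, n_neurons=100, noise=2.0):
    """Categorize by differencing the pooled responses of two selective populations."""
    drive = evidence if stimulus_is_face else -evidence
    face_pool = rng.normal(loc=drive, scale=noise, size=n_neurons)
    house_pool = rng.normal(loc=-drive, scale=noise, size=n_neurons)
    diff = face_pool.mean() - house_pool.mean()  # the comparison signal
    return ("face" if diff > 0 else "house"), abs(diff)

# Degraded images shrink `evidence`, the difference signal weakens,
# and decisions become harder and more error-prone.
label, signal = decide(stimulus_is_face=True, evidence=0.5)
print(label, round(signal, 3))
```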
Article
Full-text available
In the last few years, the notion that the brain has a default or intrinsic mode of functioning has received increasing attention. The idea derives from observations that a consistent network of brain regions shows high levels of activity when no explicit task is performed and participants are asked simply to rest. The importance of this putative "default mode" is asserted on the basis of the substantial energy demand associated with such a resting state and of the suggestion that rest entails a finely tuned balance between metabolic demand and regionally regulated blood supply. These observations, together with the fact that the default network is more active at rest than it is in a range of explicit tasks, have led some to suggest that it reflects an absolute baseline, one that must be understood and used if we are to develop a comprehensive picture of brain functioning. Here, we examine the assumptions that are generally made in accepting the importance of the "default mode". We question the value, and indeed the interpretability, of the study of the resting state and suggest that observations made under resting conditions have no privileged status as a fundamental metric of brain functioning. In doing so, we challenge the utility of studies of the resting state in a number of important domains of research.
Article
Timing is a key element of most motor skills. This article first provides a brief review of methods and models in motor timing. The effects of neurological impairment on motor timing are then described, and findings from brain activation studies are summarized. It is concluded that motor timing involves subcortical and cortical neural circuits including cerebellum, basal ganglia, and prefrontal cortex.
Article
A chronic tactile agnosic with a small, MRI-documented left inferior parietal infarction underwent detailed somesthetic testing to assess (1) the acquisition of sensory data, (2) the manipulation of the somatosensory percept and its association with previous knowledge, and (3) recognition occurring at a deeper taxonomic level. Results suggest that tactile agnosia can arise from faulty high-level perceptual processes, but that the ability to associate tactually defined objects and object parts with episodic memory can be preserved. Consistent with anatomic and physiologic studies in nonhuman primates, inferior parietal cortex (including Brodmann area 40, possibly area 39) appears to serve as a high-level somatosensory region.
Article
In accordance with its important role in prehensile activity, a large cortical area is devoted to representation of the digits. Within this large cortical zone in the macaque somatosensory cortex, the complexity of neuronal receptive field characteristics increases from area 3b to areas 1 and 2 (refs 1-7). This increase in complexity continues into the upper bank of the intraparietal sulcus, where the somatosensory cortex adjoins the parietal association cortex. In this bank, callosal connections are much denser than in the more anterior part of this cortical zone. We have now discovered a substantial number of neurons with receptive fields on the bilateral hands. It was previously thought that neuronal receptive fields were restricted to the contralateral side in this cortical zone. Neurons with bilateral receptive fields were not found after lesioning the postcentral gyrus in the contralateral hemisphere. The majority of these neurons had receptive fields of the most complex types, representing multiple digits, indicating that the interhemispheric transfer of information occurs at higher levels of the hierarchical processing in each hemisphere.
Article
In patients with lesions in the right hemisphere, frequently involving the posterior parietal regions, left-sided somatosensory (and visual and motor) deficits not only reflect a disorder of primary sensory processes, but also have a higher-order component related to a defective spatial representation of the body. This additional factor, related to right brain damage, is clinically relevant: contralesional hemianaesthesia (and hemianopia and hemiplegia) is more frequent in right brain-damaged patients than in patients with damage to the left side of the brain. Three main lines of investigation suggest the existence of this higher-order pathological factor. (i) Right brain-damaged patients with left hemineglect may show physiological evidence of preserved processing of somatosensory stimuli, of which they are not aware. Similar results have been obtained in the visual domain. (ii) Direction-specific vestibular, visual optokinetic and somatosensory or proprioceptive stimulations may displace spatial frames of reference in right brain-damaged patients with left hemineglect, reducing or increasing the extent of the patients' ipsilesional rightward directional error, and bring about similar directional effects in normal subjects. These stimulations, which may improve or worsen a number of manifestations of the neglect syndrome (such as extrapersonal and personal hemineglect), have similar effects on the severity of left somatosensory deficits (defective detection of tactile stimuli, position sense disorders). However, visuospatial hemineglect and the somatosensory deficits improved by these stimulations are independent, albeit related, disorders. (iii) The severity of left somatosensory deficits is affected by the spatial position of body segments, with reference to the midsagittal plane of the trunk. A general implication of these observations is that spatial (non-somatotopic) levels of representation contribute to corporeal awareness. The neural basis of these spatial frames includes the posterior parietal and the premotor frontal regions. These spatial representations could provide perceptual-premotor interfaces for the organization of movements (e.g. pointing, locomotion) directed towards targets in personal and extrapersonal space. In line with this view, there is evidence that the sensory stimulations that modulate left somatosensory deficits affect left motor disorders in a similar, direction-specific, fashion.
Article
Coordinated movement requires the normal operation of a number of different brain structures. Taking a modular perspective, it is argued that these structures provide unique computations that in concert produce coordinated behavior. The coordination problems of patients with cerebellar lesions can be understood as a problem in controlling and regulating the temporal patterns of movement. The timing capabilities of the cerebellum are not limited to the motor domain, but are utilized in perceptual tasks that require the precise representation of temporal information. Patients with cerebellar lesions are impaired in judging the duration of a short auditory stimulus or the velocity of a moving visual stimulus. The timing hypothesis also provides a computational account of the role of the cerebellum in certain types of learning. In particular, the cerebellum is essential for situations in which the animal must learn the temporal relationship between successive events such as in eyeblink conditioning. Modeling and behavioral studies suggest that the cerebellar timing system is best characterized as providing a near-infinite set of interval type timers rather than as a single clock with pacemaker or oscillatory properties. Thus, the cerebellum will be invoked whenever a task requires its timing function, but the exact neural elements that will be activated vary from task to task. The multiple-timer hypothesis suggests an alternative account of neuroimaging results implicating the cerebellum in higher cognitive processes. The activation may reflect the automatic preparation of multiple responses rather than be associated with processes such as semantic analysis, error detection, attention shifting, or response selection.
Article
We have recently demonstrated using fMRI that a region within the human lateral occipital complex (LOC) is activated by objects when either seen or touched. We term this cortical region LOtv for the lateral occipital tactile-visual region. We report here that LOtv voxels tend to be located in sub-regions of LOC that show preference for graspable visual objects over faces or houses. We further examine the nature of object representation in LOtv by studying its response to stimuli in three modalities: auditory, somatosensory and visual. If objects activate LOtv, irrespective of the modality used, the activation is likely to reflect a highly abstract representation. In contrast, activation specific to vision and touch may reflect common and exclusive attributes shared by these senses. We show here that while object activation is robust in both the visual and the somatosensory modalities, auditory signals do not evoke substantial responses in this region. The lack of auditory activation in LOtv cannot be explained by differences in task performance or by an ineffective auditory stimulation. Unlike vision and touch, auditory information contributes little to the recovery of the precise shape of objects. We therefore suggest that LOtv is involved in recovering the geometrical shape of objects.
Article
When two cylinders are passively moved in-phase on the volar surface of the right second and third fingers, human subjects estimate the stimuli to originate from one object, whereas two separate objects are estimated for out-of-phase stimuli. While five blindfolded subjects performed this estimation task, brain activity was measured by fMRI. The in-phase stimuli activated the left intraparietal and inferior parietal areas significantly more than did out-of-phase stimuli. These parietal regions may play important roles in the integration of moving tactile stimuli that are independently provided on plural fingers, from which subjects internally construct a single object.
Low-level and high-level processes in apparent motion
  • O. J. Braddick