Remapping of Border Ownership in the Visual Cortex
Krieger Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland 21218. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience. 01/2013; 33(5):1964-74. DOI: 10.1523/JNEUROSCI.2797-12.2013
We see objects as having continuity although the retinal image changes frequently. How such continuity is achieved is hard to understand, because neurons in the visual cortex have small receptive fields that are fixed on the retina, which means that a different set of neurons is activated every time the eyes move. Neurons in areas V1 and V2 of the visual cortex signal the local features that are currently in their receptive fields and do not show "remapping" when the image moves. However, subsets of neurons in these areas also carry information about global aspects, such as figure-ground organization. Here we performed experiments to find out whether figure-ground organization is remapped. We recorded single neurons in macaque V1 and V2 in which figure-ground organization is represented by assignment of contours to regions (border ownership). We found previously that border-ownership signals persist when a figure edge is switched to an ambiguous edge by removing the context. We now used this paradigm to see whether border ownership transfers when the ambiguous edge is moved across the retina. In the new position, the edge activated a different set of neurons at a different location in cortex. We found that border ownership was transferred to the newly activated neurons. The transfer occurred whether the edge was moved by a saccade or by moving the visual display. Thus, although the contours are coded in retinal coordinates, their assignment to objects is maintained across movements of the retinal image.
Article: Border-ownership coding. 01/2013; 8(10):30040. DOI: 10.4249/scholarpedia.30040
ABSTRACT: How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Frontiers in Psychology 01/2014; 5:1457. DOI: 10.3389/fpsyg.2014.01457
ABSTRACT: Contour adaptation (CA) is a recently described paradigm that renders otherwise salient visual stimuli temporarily perceptually invisible. Here we investigate whether this illusion can be exploited to study visual awareness. We found that CA can induce seconds of sustained invisibility following similarly long periods of uninterrupted adaptation. Furthermore, even fragmented adaptors are capable of producing CA, with the strength of CA increasing monotonically as the adaptors encompass a greater fraction of the stimulus outline. However, different types of adaptor patterns, such as distinctive shapes or illusory contours, produce equivalent levels of CA, suggesting that the main determinants of CA are low-level stimulus characteristics, with minimal modulation by higher-order visual processes. Taken together, our results indicate that CA has desirable properties for studying visual awareness, including the production of prolonged periods of perceptual dissociation from stimulation, as well as parametric dependencies of that dissociation on a host of stimulus parameters.
Consciousness and Cognition 05/2014; 26(1):37–50. DOI: 10.1016/j.concog.2014.02.007