Article

Remapping of Border Ownership in the Visual Cortex

Krieger Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland 21218.
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience. 01/2013; 33(5):1964-74. DOI: 10.1523/JNEUROSCI.2797-12.2013
Source: PubMed

ABSTRACT: We see objects as having continuity although the retinal image changes frequently. How such continuity is achieved is hard to understand, because neurons in the visual cortex have small receptive fields that are fixed on the retina, which means that a different set of neurons is activated every time the eyes move. Neurons in areas V1 and V2 of the visual cortex signal the local features that are currently in their receptive fields and do not show "remapping" when the image moves. However, subsets of neurons in these areas also carry information about global aspects, such as figure-ground organization. Here we performed experiments to find out whether figure-ground organization is remapped. We recorded single neurons in macaque V1 and V2 in which figure-ground organization is represented by assignment of contours to regions (border ownership). We found previously that border-ownership signals persist when a figure edge is switched to an ambiguous edge by removing the context. We now used this paradigm to see whether border ownership transfers when the ambiguous edge is moved across the retina. In the new position, the edge activated a different set of neurons at a different location in cortex. We found that border ownership was transferred to the newly activated neurons. The transfer occurred whether the edge was moved by a saccade or by moving the visual display. Thus, although the contours are coded in retinal coordinates, their assignment to objects is maintained across movements of the retinal image.
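The transfer result lends itself to a compact illustration. Below is a minimal toy sketch in Python, not the authors' analysis code: model neurons are indexed by retinal position, yet the ownership tag set by context survives context removal and follows the edge to a new retinal position. The one-dimensional layout, the function names, and the persistence factor are all invented for illustration.

```python
import numpy as np

# Toy model: one border-ownership (B) cell per retinal position.
N = 21
bo_signal = np.zeros(N)   # signed response: +1 = owner on one side, -1 = other

def present_figure(edge_pos, owner_side):
    """A figure (context) at edge_pos sets the border-ownership tag."""
    bo_signal[:] = 0.0
    bo_signal[edge_pos] = owner_side          # +1 or -1

def remove_context(edge_pos):
    """The edge becomes ambiguous; the signal persists (decay value assumed)."""
    bo_signal[edge_pos] *= 0.8

def move_edge(old_pos, new_pos):
    """Saccade or display shift: a different cell now covers the edge,
    yet it inherits the ownership tag (the remapping result)."""
    bo_signal[new_pos] = bo_signal[old_pos]
    bo_signal[old_pos] = 0.0

present_figure(edge_pos=5, owner_side=+1)
remove_context(5)
move_edge(5, 12)
print(bo_signal[12])   # 0.8: ownership transferred to the newly activated cell
```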

Cited by:
  •
    ABSTRACT: Camouflaged animals that have very similar textures to their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the texture of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure–ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields lie near kinetically defined borders that separate figure and background. Model simulations support border ownership as a general mechanism by which the visual system performs figure–ground segregation, regardless of whether figure–ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons that are sensitive to kinetic edges are selective for border ownership (magnocellular B cells). A distinct population of model V2 neurons is selective for border ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields, which biases border-ownership signals toward the figure. We predict that neurons in V4 and MT sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces a shearing motion with respect to the background.
    Vision Research 11/2014; 106. DOI: 10.1016/j.visres.2014.11.002
    (A toy sketch of the half-field comparison described above appears after this list.)
  •
    ABSTRACT: How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single-neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
    Frontiers in Psychology 01/2014; 5:1457. DOI: 10.3389/fpsyg.2014.01457
    (A toy sketch of predictive remapping of a retinotopic map appears after this list.)
  •
    ABSTRACT: Visual structures in the environment are segmented into image regions, which are then combined into representations of surfaces and prototypical objects. This perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be able to represent both large, articulated changes in a shape's boundary and very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundaries over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact through feedforward hierarchical processing combined with feedback signals driven by representations generated at higher stages. In this way, both global configurational and local information are made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to a distributed hierarchical cortical shape representation and how they combine with processes of figure-ground segregation. We probe the model with a selection of stimuli to illustrate the results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages of the processing hierarchy.
    Frontiers in Computational Neuroscience 08/2014; 8:93. DOI: 10.3389/fncom.2014.00093
    (A toy sketch of modulatory feedback appears below.)
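For the first citing article (the camouflage-breaking model), here is a minimal sketch of the half-field comparison idea: assign ownership of a border point to the side carrying more feature energy, which in the camouflage case is motion energy rather than luminance. This is an illustrative reduction, not the published model; the function, the disk radius, and the test image are invented.

```python
import numpy as np

def border_ownership_sign(feature_map, y, x, normal, radius=5):
    """Compare summed feature energy (e.g., motion energy) in the two
    half-disks flanking a border point (assumed to lie in the interior);
    the side with more energy is taken as the figure side."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = ys**2 + xs**2 <= radius**2
    side = (ys * normal[0] + xs * normal[1]) > 0     # split along the border
    patch = feature_map[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.sign(patch[disk & side].sum() - patch[disk & ~side].sum())

# Camouflage case: texture statistics are identical everywhere, but only
# the moving figure region carries motion energy.
motion_energy = np.zeros((40, 40))
motion_energy[10:30, 10:30] = 1.0                    # the moving figure
print(border_ownership_sign(motion_energy, 20, 10, normal=(0, 1)))  # 1.0
```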
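For the 3D ARTSCAN article, a minimal sketch of predictive remapping of a retinotopic map: before the eyes move, the map is shifted by the impending saccade vector, so the attentional shroud is already in register with the postsaccadic input. The array sizes and names are invented, and real remapping involves gain fields and learned coordinate transformations, none of which are modeled here.

```python
import numpy as np

def predictive_remap(attention_map, saccade_vec):
    """Shift a retinotopic attention map opposite to the impending saccade,
    anticipating where attended objects will land on the retina."""
    dy, dx = saccade_vec
    return np.roll(attention_map, shift=(-dy, -dx), axis=(0, 1))

shroud = np.zeros((8, 8))
shroud[2, 2] = 1.0                                  # attended object at (2, 2)
remapped = predictive_remap(shroud, saccade_vec=(2, 2))
print(np.argwhere(remapped == 1.0))                 # [[0 0]]
```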
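For the recurrent shape-representation article, a minimal sketch of modulatory feedback: a higher stage pools lower-level responses, and its feedback multiplicatively re-weights the units it pools from, so the loop settles on one grouping without feedback ever creating activity on its own. The weights, gain, and two-group setup are invented for illustration and do not reproduce the published architecture.

```python
import numpy as np

def settle(ff, pool, gain=1.5, steps=10):
    """Feedforward pooling plus multiplicative (modulatory) feedback:
    feedback re-weights the driving input but cannot activate units
    that receive no feedforward signal."""
    a = ff.copy()
    for _ in range(steps):
        higher = pool @ a                              # pooling stage
        a = ff * (1.0 + gain * (pool.T @ higher))      # modulatory feedback
        a /= np.linalg.norm(a) + 1e-9                  # keep activity bounded
    return a

ff = np.array([0.51, 0.51, 0.50, 0.50])    # two nearly ambiguous contour groups
pool = np.array([[1.0, 1.0, 0.0, 0.0],     # higher-stage unit for group 1
                 [0.0, 0.0, 1.0, 1.0]])    # higher-stage unit for group 2
print(settle(ff, pool))                    # group-1 responses are enhanced
```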