Pitting binding against selection - electrophysiological measures of feature-based attention are attenuated by Gestalt object grouping

Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, USA.
European Journal of Neuroscience (Impact Factor: 3.18). 03/2012; 35(6):960-7. DOI: 10.1111/j.1460-9568.2012.08016.x
Source: PubMed


Humans have limited cognitive resources to process the nearly limitless information available in the environment. Endogenous, or 'top-down', selective attention to basic visual features such as color or motion is a common strategy for biasing resources in favor of the most relevant information sources in a given context. Opposing this top-down separation of features is a 'bottom-up' tendency to integrate, or bind, the various features that constitute objects. We pitted these two processes against each other in an electrophysiological experiment to test if top-down selective attention can overcome constitutive binding processes. Our results demonstrate that bottom-up binding processes can dominate top-down feature-based attention even when explicitly detrimental to task performance.

Available from: Adam C Snyder
  • "Recent studies have reported that object-based representation also enhances the binding of different features within an object even when either of them is task-irrelevant, suggesting that the object-based enhancement of feature binding is automatically operated via attentional spreading across cortical regions that process features within the object (Fiebelkorn et al., 2010; Snyder and Foxe, 2012; Snyder et al., 2012). This indicates that object-based attention interacts with feature-based attention in a bottom-up manner."
    ABSTRACT: A large body of evidence indicates that visual attention - the cognitive process of selectively concentrating on a salient or task-relevant subset of visual information - often operates on object-based representations. Recent studies have postulated two possible accounts of the object-specific attentional advantage: attentional spreading, which modulates the bottom-up signal for sensory processing, and attentional prioritization, which modulates the top-down signal for attentional allocation. It remains unclear which account explains the object-specific attentional advantage. To address this issue, we examined the influence of the object-specific advantage on two types of visual search: parallel search, invoked when a bottom-up signal is fully available at the target location, and serial search, invoked when the bottom-up signal is insufficient to guide target selection and top-down control of shifts of focused attention is required. Our results revealed that the object-specific advantage applies to serial search but not to parallel search, suggesting that object-based attention facilitates stimulus processing by affecting the priority of attentional shifts rather than by enhancing sensory signals. Thus, our findings support the notion that the object-specific attentional advantage is explained by attentional prioritization rather than by attentional spreading.
    Article · Feb 2014 · Frontiers in Human Neuroscience
  • ABSTRACT: Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp-recorded neuroelectric or neuromagnetic signals. While this approach has been extremely useful, it suffers from two drawbacks: a lack of naturalness in the stimuli and a lack of precision about the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latencies, topographic distributions and the estimated locations of their sources. We compare our approach with fMRI/MRI-informed source analysis methods and, in doing so, provide novel information on the timing of coherent motion processing in human V6. Generalized, this approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.
    Article · Apr 2014 · NeuroImage
  • ABSTRACT: How visual attributes processed in different brain loci at different speeds are bound together to give us our unitary experience of the visual world remains unknown. In this study we investigated whether bound representations arise, as commonly assumed, through physiological interactions between cells in the visual areas. In a focal attentional task in which correct responses from either bound or unbound representations were possible, participants discriminated the color or orientation of briefly presented single bars. If the representations of the two attributes are bound, the accuracy of reporting color and orientation should co-vary; if the attributes are not mandatorily bound, the accuracies of the two reports should be independent. The psychophysical results reported here supported the latter, non-binding relationship between visual features, suggesting that binding does not necessarily occur even under focal attention. We propose a task-contingent binding mechanism, postulating that binding occurs at late, post-perceptual (PP) stages through the intervention of memory.
    Article · Oct 2014 · Frontiers in Human Neuroscience
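The linear modeling approach described in the NeuroImage abstract above - regressing recorded brain signals against a continuous stimulus-modulation signal rather than averaging responses to discrete stimuli - can be illustrated with a toy sketch. This is not the authors' pipeline: the sampling rate, number of lags, noise level, and simulated one-channel "EEG" are all invented for illustration, and ordinary least squares stands in for whatever estimator the study used.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 100                       # assumed sampling rate (Hz)
n = fs * 60                    # 60 s of simulated data
stim = rng.standard_normal(n)  # e.g. a contrast-modulation signal

# Simulated "EEG": the stimulus convolved with an unknown
# impulse response, plus additive noise.
true_kernel = np.exp(-np.arange(20) / 5.0)
eeg = np.convolve(stim, true_kernel)[:n] + 0.5 * rng.standard_normal(n)

# Build a time-lagged design matrix: column k holds the stimulus
# delayed by k samples (lags 0..19).
lags = 20
X = np.column_stack([np.roll(stim, k) for k in range(lags)])
X[:lags, :] = 0  # discard rows contaminated by np.roll wrap-around

# Ordinary least squares: the fitted coefficients estimate the
# response of the signal at each lag relative to the stimulus.
kernel_hat, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

With real multichannel EEG, the same regression run per channel yields a lag-by-channel response whose latency and topography can be inspected, which is the spirit of the method the abstract describes.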