ABSTRACT: Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings, which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, Sz patients were selectively impaired at detecting LSF target stimuli, and ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase-locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses, which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients' behavioral deficits in visual information processing as well as with visually based neurocognitive deficits.
These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz.
Frontiers in Human Neuroscience 08/2015; 9. DOI:10.3389/fnhum.2015.00371 · 2.90 Impact Factor
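The distinction drawn in the abstract above between phase-locked (evoked) and non-phase-locked (induced) activity can be illustrated with a minimal sketch: evoked power is the power of the trial-averaged waveform, while induced power is the remainder after subtracting it from the mean single-trial power. This is a simplified textbook decomposition, not the study's actual pipeline; the sampling rate, band limits, and array shapes below are illustrative assumptions.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean power of signal x within [f_lo, f_hi] Hz, via the FFT.
    Works on a 1-D signal or on the last axis of an epoch array."""
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[..., band].mean(axis=-1)

def evoked_induced(trials, fs, f_lo=7.0, f_hi=14.0):
    """Split alpha-band power into evoked (phase-locked) and
    induced (non-phase-locked) parts.

    trials : (n_trials, n_samples) array of epoched EEG.
    Simplification: induced power is estimated as total minus
    evoked, rather than by subtracting the ERP from each trial
    before the time-frequency transform.
    """
    total = band_power(trials, fs, f_lo, f_hi).mean(axis=0)   # mean of per-trial power
    evoked = band_power(trials.mean(axis=0), fs, f_lo, f_hi)  # power of the trial average
    induced = total - evoked                                  # non-phase-locked remainder
    return evoked, induced
```

With identical (perfectly phase-locked) trials, the induced estimate is zero; with trials whose alpha phase varies randomly, most of the power shows up as induced, which is why such activity is visible only in induced power analyses.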
ABSTRACT: To isolate neural correlates of conscious perception (NCCs), a standard approach has been to contrast neural activity elicited by identical stimuli of which subjects are aware vs. unaware. Because conscious experience is private, determining whether a stimulus was consciously perceived requires subjective report: e.g., button-presses indicating detection, visibility ratings, verbal reports, etc. This reporting requirement introduces a methodological confound when attempting to isolate NCCs: The neural processes responsible for accessing and reporting one's percept are difficult to distinguish from those underlying the conscious percept itself. Here, we review recent attempts to circumvent this issue via a modified inattentional blindness paradigm (Pitts et al., 2012) and present new data from a backward masking experiment in which task-relevance and visual awareness were manipulated in a 2 × 2 crossed design. In agreement with our previous inattentional blindness results, stimuli that were consciously perceived yet not immediately accessed for report (aware, task-irrelevant condition) elicited a mid-latency posterior ERP negativity (~200-240 ms), while stimuli that were accessed for report (aware, task-relevant condition) elicited additional components including a robust P3b (~380-480 ms) subsequent to the mid-latency negativity. Overall, these results suggest that some of the NCCs identified in previous studies may be more closely linked with accessing and maintaining perceptual information for reporting purposes than with encoding the conscious percept itself. An open question is whether the remaining NCC candidate (the ERP negativity at 200-240 ms) reflects visual awareness or object-based attention.
Frontiers in Psychology 10/2014; 5:1078. DOI:10.3389/fpsyg.2014.01078 · 2.80 Impact Factor
ABSTRACT: Previous research on attentional selection of features has yielded seemingly contradictory results: many experiments have found "global" facilitation of attended features across the entire visual field, whereas classic event-related potential (ERP) studies reported an enhancement of attended features at the attended location only. To test the hypothesis that these conflicting results can be explained by temporal stimulus differences, we compared the time-course of feature-selective attention inside and outside the spatial focus of attention. We presented fields of randomly moving purple dots on either side of fixation. Participants were audio-visually cued to attend to either red or blue dots on either the left or right side in order to detect brief coherent motion targets. After a delay, which allowed participants sufficient time to shift attention to the cued location, the purple dots on both sides changed color simultaneously so that half of them became blue and the other half red. Each of these four dot populations flickered at a different frequency, thereby eliciting distinguishable steady-state visual evoked potentials (SSVEPs). This allowed us to concurrently measure the time-course of feature-selective attentional enhancement of stimulus processing in visual cortex after onset of the attended feature on both the attended and the unattended side. The onset of feature-selective attention on the attended side occurred over 100 ms earlier than on the unattended side. The finding that feature-selective attention is not spatially global from the outset, but that its effect spreads to unattended locations with a temporal delay, resolves previous contradictions between studies that found global selection of features and studies that failed to find such global selection because they used briefly flashed stimuli.
We speculate that the observed delay might be caused by the time needed to coordinate attentional control signals between hemispheres, although the exact mechanisms are still unknown.
Meeting abstract presented at VSS 2014
Journal of Vision 08/2014; 14(10):23-23. DOI:10.1167/14.10.23 · 2.73 Impact Factor
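Frequency tagging of the kind described above depends on reading out the EEG amplitude at each stimulus's flicker frequency, since each flickering dot population drives an SSVEP at its own rate. A minimal sketch, assuming a single channel and exact-bin flicker frequencies; the rates and amplitudes below are hypothetical, not the ones used in the study:

```python
import numpy as np

def ssvep_amplitudes(eeg, fs, tag_freqs):
    """Estimate SSVEP amplitude at each tagging frequency.

    eeg       : 1-D array (one channel, one epoch)
    fs        : sampling rate in Hz
    tag_freqs : list of flicker rates, one per stimulus population
    """
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Single-sided amplitude spectrum (correct for non-DC, non-Nyquist bins).
    amp = np.abs(np.fft.rfft(eeg)) * 2.0 / n
    # Nearest-bin lookup for each tagging frequency.
    return {f: amp[np.argmin(np.abs(freqs - f))] for f in tag_freqs}
```

To recover a time-course of attentional enhancement, as in the study, the same readout would be applied in short sliding windows so that amplitude at each tagged frequency can be tracked over time.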
ABSTRACT: A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects' task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception.
ABSTRACT: A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 07/2014; 34(29):9817-24. DOI:10.1523/JNEUROSCI.4869-13.2014 · 6.75 Impact Factor
ABSTRACT: A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
ABSTRACT: This study investigated the effects of attentional load on neural responses to attended and irrelevant visual stimuli by recording high-density event-related potentials (ERPs) from the scalp in normal adult subjects. Peripheral (upper and lower visual field) and central stimuli were presented in random order at a rapid rate while subjects responded to targets among the central stimuli. Color detection and color-orientation conjunction search tasks were used as the low- and high-load tasks, respectively. Behavioral results showed significant load effects on both accuracy and reaction time for target detections. ERP results revealed no significant load effect on the initial C1 component (60-100 ms) evoked by either central-relevant or peripheral-irrelevant stimuli. Source analysis with dipole modeling confirmed previous reports that the C1 includes the initial evoked response in primary visual cortex. Source analyses indicated that high attentional load enhanced the early (70-140 ms) neural response to central-relevant stimuli in ventral-lateral extrastriate cortex, whereas load effects on peripheral-irrelevant stimulus processing started at 110 ms and were localized to more dorsal and anterior extrastriate cortical areas. These results provide evidence that the earliest stages of visual cortical processing are not modified by attentional load and show that attentional load affects the processing of task relevant and irrelevant stimuli in different ways.
Human Brain Mapping 07/2014; 35(7):3008-24. DOI:10.1002/hbm.22381 · 6.92 Impact Factor
ABSTRACT: An essential task of our perceptual systems is to bind together the distinctive features of single objects and events into unitary percepts, even when those features are registered in different sensory modalities. In cases where auditory and visual inputs are spatially incongruent, they may still be perceived as belonging to a single event at the location of the visual stimulus, a phenomenon known as the 'ventriloquist illusion'. The present study examined how audio-visual temporal congruence influences the ventriloquist illusion and characterized its neural underpinnings with functional magnetic resonance imaging (fMRI). Behaviorally, the ventriloquist illusion was reduced for asynchronous versus synchronous audio-visual stimuli, in accordance with previous reports. Neural activity patterns associated with the ventriloquist effect were consistently observed in the planum temporale (PT), with a reduction in illusion-related fMRI signals ipsilateral to visual stimulation for central sounds perceived peripherally and a contralateral increase in illusion-related fMRI signals for peripheral sounds perceived centrally. Moreover, it was found that separate but adjacent regions within the PT were preferentially activated for ventriloquist illusions produced by synchronous and asynchronous audio-visual stimulation. We conclude that the left-right balance of neural activity in the PT represents the neural code that underlies the ventriloquist illusion, with greater activity in the cerebral hemisphere contralateral to the direction of the perceived shift of sound location.
ABSTRACT: Spatial frequency (SF) selection has long been recognized to play a role in global and local processing, though the nature of the relationship between SF processing and global/local perception is debated. Previous studies have shown that attention to relatively lower SFs facilitates global perception, and that attention to relatively higher SFs facilitates local perception. Here we recorded event-related brain potentials (ERPs) to investigate whether processing of low versus high SFs is modulated automatically during global and local perception, and to examine the time course of any such effects. Participants compared bilaterally presented hierarchical letter stimuli and attended to either the global or local levels. Irrelevant SF grating probes flashed at the center of the display 200 ms after the onset of the hierarchical letter stimuli could either be low or high in SF. It was found that ERPs elicited by the SF grating probes differed as a function of attended level (global versus local). ERPs elicited by low SF grating probes were more positive in the interval 196-236 ms during global than local attention, and this difference was greater over the right occipital scalp. In contrast, ERPs elicited by the high SF gratings were more positive in the interval 250-290 ms during local than global attention, and this difference was bilaterally distributed over the occipital scalp. These results indicate that directing attention to global versus local levels of a hierarchical display facilitates automatic perceptual processing of low versus high SFs, respectively, and this facilitation is not limited to the locations occupied by the hierarchical display.
The relatively long latency of these attention-related ERP modulations suggests that initial (early) SF processing is not affected by attention to hierarchical level, lending support to theories positing a higher level mechanism to underlie the relationship between SF processing and global versus local perception.
Frontiers in Psychology 04/2014; 5:277. DOI:10.3389/fpsyg.2014.00277 · 2.80 Impact Factor
ABSTRACT: Object-based theories of attention propose that the selection of an object's feature leads to the rapid selection of all other constituent features, even those that are task irrelevant. We used magnetoencephalographic recordings to examine the timing and sequencing of neural activity patterns in feature-specific cortical areas as human subjects performed an object-based attention task. Subjects attended to one of two superimposed moving dot arrays that were perceived as transparent surfaces on the basis either of color or speed of motion. When surface motion was attended, the magnetoencephalographic waveforms showed enhanced activity in the motion-specific cortical area starting at ∼150 ms after motion onset, followed after ∼60 ms by enhanced activity in the color-specific area. When surface color was attended, this temporal sequence was reversed. This rapid sequential activation of the relevant and irrelevant feature modules provides a neural basis for the binding of an object's features into a unitary perceptual experience.
ABSTRACT: In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 11/2013; 33(46):18200-7. DOI:10.1523/JNEUROSCI.1913-13.2013 · 6.75 Impact Factor
ABSTRACT: Multisensory interactions can lead to illusory percepts, as exemplified by the sound-induced extra flash illusion (SIFI: Shams et al., 2000, 2002). In this illusion, an audio-visual stimulus sequence consisting of two pulsed sounds and a light flash presented within a 100 ms time window generates the visual percept of two flashes. Here, we used colored visual stimuli to investigate whether concurrent auditory stimuli can affect the perceived features of the illusory flash. Zero, one or two pulsed sounds were presented concurrently with either a red or green flash or with two flashes of different colors (red followed by green) in rapid sequence. By querying both the number and color of the participants' visual percepts, we found that the double flash illusion is stimulus specific: i.e., two sounds paired with one red or one green flash generated the percept of two red or two green flashes, respectively. This implies that the illusory second flash is induced at a level of visual processing after perceived color has been encoded. In addition, we found that the presence of two sounds influenced the integration of color information from two successive flashes. In the absence of any sounds, a red and a green flash presented in rapid succession fused to form a single orange percept, but when accompanied by two sounds, this integrated orange percept was perceived to flash twice on a significant proportion of trials. In addition, the number of concurrent auditory stimuli modified the degree to which the successive flashes were integrated to an orange percept versus maintained as separate red-green percepts. Overall, these findings show that concurrent auditory input can affect both the temporal and featural properties of visual percepts.
Vision research 10/2013; 93. DOI:10.1016/j.visres.2013.10.013 · 2.38 Impact Factor
ABSTRACT: The way we perceive an object depends both on feedforward, bottom-up processing of its physical stimulus properties and on top-down factors such as attention, context, expectation, and task relevance. Here we compared neural activity elicited by varying perceptions of the same physical image: a bistable moving image in which perception spontaneously alternates between dissociated fragments and a single, unified object. A time-frequency analysis of EEG changes associated with the perceptual switch from object to fragment and vice versa revealed a greater decrease in alpha (8-12 Hz) accompanying the switch to object percept than to fragment percept. Recordings of event-related potentials elicited by irrelevant probes superimposed on the moving image revealed an enhanced positivity between 184 and 212 ms when the probes were contained within the boundaries of the perceived unitary object. The topography of the positivity (P2) in this latency range elicited by probes during object perception was distinct from the topography elicited by probes during fragment perception, suggesting that the neural processing of probes differed as a function of perceptual state. Two source localization algorithms estimated the neural generator of this object-related difference to lie in the lateral occipital cortex, a region long associated with object perception. These data suggest that perceived objects attract attention, incorporate visual elements occurring within their boundaries into unified object representations, and enhance the visual processing of elements occurring within their boundaries. Importantly, the perceived object in this case emerged as a function of the fluctuating perceptual state of the viewer.
Journal of Vision 07/2013; 13(13). DOI:10.1167/13.13.17 · 2.73 Impact Factor
ABSTRACT: Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 05/2013; 33(21):9194-9201. DOI:10.1523/JNEUROSCI.5902-12.2013 · 6.75 Impact Factor
ABSTRACT: Our senses interact in daily life through multisensory integration, facilitating perceptual processes and behavioral responses. The neural mechanisms proposed to underlie this multisensory facilitation include anatomical connections directly linking early sensory areas, indirect connections to higher-order multisensory regions, as well as thalamic connections. Here we examine the relationship between white matter connectivity, as assessed with diffusion tensor imaging, and individual differences in multisensory facilitation and provide the first demonstration of a relationship between anatomical connectivity and multisensory processing in typically developed individuals. Using a whole-brain analysis and contrasting anatomical models of multisensory processing we found that increased connectivity between parietal regions and early sensory areas was associated with the facilitation of reaction times to multisensory (auditory-visual) stimuli. Furthermore, building on prior animal work suggesting the involvement of the superior colliculus in this process, using probabilistic tractography we determined that the strongest cortical projection area connected with the superior colliculus includes the region of connectivity implicated in our independent whole-brain analysis.
ABSTRACT: It is widely reported that inverting a face dramatically affects its recognition. Previous studies have shown that face inversion increases the amplitude and delays the latency of the face-specific N170 component of the event-related potential (ERP) and also enhances the amplitude of the occipital P1 component (latency 100-132 ms). The present study investigates whether these effects of face inversion can be modulated by visual spatial attention. Participants viewed two streams of visual stimuli, one to the left and one to the right of fixation. One stream consisted of a sequence of alphanumeric characters at 6.67 Hz, and the other stream consisted of a series of upright and inverted images of faces and houses presented in randomized order. The participants' task was to attend selectively to one or the other of the streams (during different blocks) in order to detect infrequent target stimuli. ERPs elicited by inverted faces showed larger P1 amplitudes compared to upright faces, but only when the faces were attended. In contrast, the N170 amplitude was larger to inverted than to upright faces only when the faces were not attended. The N170 peak latency was delayed to inverted faces regardless of attention condition. These inversion effects were face specific, as similar effects were absent for houses. These results suggest that early stages of face-specific processing can be enhanced by attention, but when faces are not attended the onset of face-specific processing is delayed until the latency range of the N170.