ABSTRACT: A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects' task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception.
ABSTRACT: A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound.
Journal of Neuroscience 07/2014; 34(29):9817-24.
ABSTRACT: This study investigated the effects of attentional load on neural responses to attended and irrelevant visual stimuli by recording high-density event-related potentials (ERPs) from the scalp in normal adult subjects. Peripheral (upper and lower visual field) and central stimuli were presented in random order at a rapid rate while subjects responded to targets among the central stimuli. Color detection and color-orientation conjunction search tasks were used as the low- and high-load tasks, respectively. Behavioral results showed significant load effects on both accuracy and reaction time for target detections. ERP results revealed no significant load effect on the initial C1 component (60-100 ms) evoked by either central-relevant or peripheral-irrelevant stimuli. Source analysis with dipole modeling confirmed previous reports that the C1 includes the initial evoked response in primary visual cortex. Source analyses indicated that high attentional load enhanced the early (70-140 ms) neural response to central-relevant stimuli in ventral-lateral extrastriate cortex, whereas load effects on peripheral-irrelevant stimulus processing started at 110 ms and were localized to more dorsal and anterior extrastriate cortical areas. These results provide evidence that the earliest stages of visual cortical processing are not modified by attentional load and show that attentional load affects the processing of task relevant and irrelevant stimuli in different ways.
Human Brain Mapping 07/2014; 35(7):3008-24. · 6.88 Impact Factor
ABSTRACT: An essential task of our perceptual systems is to bind together the distinctive features of single objects and events into unitary percepts, even when those features are registered in different sensory modalities. In cases where auditory and visual inputs are spatially incongruent, they may still be perceived as belonging to a single event at the location of the visual stimulus -- a phenomenon known as the 'ventriloquist illusion'. The present study examined how audio-visual temporal congruence influences the ventriloquist illusion and characterized its neural underpinnings with functional magnetic resonance imaging (fMRI). Behaviorally, the ventriloquist illusion was reduced for asynchronous versus synchronous audio-visual stimuli, in accordance with previous reports. Neural activity patterns associated with the ventriloquist effect were consistently observed in the planum temporale (PT), with a reduction in illusion-related fMRI-signals ipsilateral to visual stimulation for central sounds perceived peripherally and a contralateral increase in illusion-related fMRI-signals for peripheral sounds perceived centrally. Moreover, it was found that separate but adjacent regions within the PT were preferentially activated for ventriloquist illusions produced by synchronous and asynchronous audio-visual stimulation. We conclude that the left-right balance of neural activity in the PT represents the neural code that underlies the ventriloquist illusion, with greater activity in the cerebral hemisphere contralateral to the direction of the perceived shift of sound location.
ABSTRACT: Object-based theories of attention propose that the selection of an object's feature leads to the rapid selection of all other constituent features, even those that are task irrelevant. We used magnetoencephalographic recordings to examine the timing and sequencing of neural activity patterns in feature-specific cortical areas as human subjects performed an object-based attention task. Subjects attended to one of two superimposed moving dot arrays that were perceived as transparent surfaces on the basis either of color or speed of motion. When surface motion was attended, the magnetoencephalographic waveforms showed enhanced activity in the motion-specific cortical area starting at ∼150 ms after motion onset, followed after ∼60 ms by enhanced activity in the color-specific area. When surface color was attended, this temporal sequence was reversed. This rapid sequential activation of the relevant and irrelevant feature modules provides a neural basis for the binding of an object's features into a unitary perceptual experience.
ABSTRACT: To isolate neural correlates of conscious perception (NCCs), a standard approach has been to contrast neural activity elicited by identical stimuli of which subjects are aware vs. unaware. Because conscious experience is private, determining whether a stimulus was consciously perceived requires subjective report: e.g., button-presses indicating detection, visibility ratings, verbal reports, etc. This reporting requirement introduces a methodological confound when attempting to isolate NCCs: The neural processes responsible for accessing and reporting one's percept are difficult to distinguish from those underlying the conscious percept itself. Here, we review recent attempts to circumvent this issue via a modified inattentional blindness paradigm (Pitts et al., 2012) and present new data from a backward masking experiment in which task-relevance and visual awareness were manipulated in a 2 × 2 crossed design. In agreement with our previous inattentional blindness results, stimuli that were consciously perceived yet not immediately accessed for report (aware, task-irrelevant condition) elicited a mid-latency posterior ERP negativity (~200-240 ms), while stimuli that were accessed for report (aware, task-relevant condition) elicited additional components including a robust P3b (~380-480 ms) subsequent to the mid-latency negativity. Overall, these results suggest that some of the NCCs identified in previous studies may be more closely linked with accessing and maintaining perceptual information for reporting purposes than with encoding the conscious percept itself. An open question is whether the remaining NCC candidate (the ERP negativity at 200-240 ms) reflects visual awareness or object-based attention.
Frontiers in Psychology 01/2014; 5:1078. · 2.80 Impact Factor
ABSTRACT: Spatial frequency (SF) selection has long been recognized to play a role in global and local processing, though the nature of the relationship between SF processing and global/local perception is debated. Previous studies have shown that attention to relatively lower SFs facilitates global perception, and that attention to relatively higher SFs facilitates local perception. Here we recorded event-related brain potentials (ERPs) to investigate whether processing of low versus high SFs is modulated automatically during global and local perception, and to examine the time course of any such effects. Participants compared bilaterally presented hierarchical letter stimuli and attended to either the global or local levels. Irrelevant SF grating probes flashed at the center of the display 200 ms after the onset of the hierarchical letter stimuli could either be low or high in SF. It was found that ERPs elicited by the SF grating probes differed as a function of attended level (global versus local). ERPs elicited by low SF grating probes were more positive in the interval 196-236 ms during global than local attention, and this difference was greater over the right occipital scalp. In contrast, ERPs elicited by the high SF gratings were more positive in the interval 250-290 ms during local than global attention, and this difference was bilaterally distributed over the occipital scalp. These results indicate that directing attention to global versus local levels of a hierarchical display facilitates automatic perceptual processing of low versus high SFs, respectively, and this facilitation is not limited to the locations occupied by the hierarchical display. The relatively long latency of these attention-related ERP modulations suggests that initial (early) SF processing is not affected by attention to hierarchical level, lending support to theories positing a higher level mechanism to underlie the relationship between SF processing and global versus local perception.
Frontiers in Psychology 01/2014; 5:277. · 2.80 Impact Factor
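The ERP comparisons described in the abstract above rest on a common operation: epoching the EEG around stimulus onsets, averaging within condition, and comparing mean amplitudes in a latency window (e.g., 196-236 ms). Below is a minimal Python/NumPy sketch of that operation on synthetic single-channel data; the sampling rate, event times, and function names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def epoch_average(eeg, events, sfreq, tmin, tmax):
    """Average single-channel EEG segments time-locked to event onset samples."""
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[e + start:e + stop] for e in events])
    return epochs.mean(axis=0)

def mean_window_amplitude(erp, sfreq, tmin, win_start, win_end):
    """Mean ERP amplitude in a latency window (times in seconds from stimulus onset)."""
    i0 = int(round((win_start - tmin) * sfreq))
    i1 = int(round((win_end - tmin) * sfreq))
    return erp[i0:i1].mean()

# Illustrative demo with synthetic data: a positivity peaking near 216 ms
# follows the "attend-global" events but not the "attend-local" events.
sfreq = 250.0
t = np.arange(0, 0.5, 1 / sfreq)
positivity = np.exp(-((t - 0.216) ** 2) / (2 * 0.02 ** 2))
eeg = np.zeros(8000)
global_events, local_events = [1000, 2000, 3000], [4000, 5000, 6000]
for e in global_events:
    eeg[e:e + len(positivity)] += positivity

erp_global = epoch_average(eeg, global_events, sfreq, 0.0, 0.5)
erp_local = epoch_average(eeg, local_events, sfreq, 0.0, 0.5)
amp_global = mean_window_amplitude(erp_global, sfreq, 0.0, 0.196, 0.236)
amp_local = mean_window_amplitude(erp_local, sfreq, 0.0, 0.196, 0.236)
```

In a real analysis the epochs would also be baseline-corrected and artifact-rejected, and the window amplitudes submitted to statistics across subjects; the sketch only shows the core averaging-and-windowing step.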
ABSTRACT: In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.
Journal of Neuroscience 11/2013; 33(46):18200-7. · 6.91 Impact Factor
ABSTRACT: Multisensory interactions can lead to illusory percepts, as exemplified by the sound-induced extra flash illusion (SIFI: Shams et al., 2000, 2002). In this illusion, an audio-visual stimulus sequence consisting of two pulsed sounds and a light flash presented within a 100 ms time window generates the visual percept of two flashes. Here, we used colored visual stimuli to investigate whether concurrent auditory stimuli can affect the perceived features of the illusory flash. Zero, one or two pulsed sounds were presented concurrently with either a red or green flash or with two flashes of different colors (red followed by green) in rapid sequence. By querying both the number and color of the participants' visual percepts, we found that the double flash illusion is stimulus specific: i.e., two sounds paired with one red or one green flash generated the percept of two red or two green flashes, respectively. This implies that the illusory second flash is induced at a level of visual processing after perceived color has been encoded. In addition, we found that the presence of two sounds influenced the integration of color information from two successive flashes. In the absence of any sounds, a red and a green flash presented in rapid succession fused to form a single orange percept, but when accompanied by two sounds, this integrated orange percept was perceived to flash twice on a significant proportion of trials. In addition, the number of concurrent auditory stimuli modified the degree to which the successive flashes were integrated to an orange percept versus maintained as separate red-green percepts. Overall, these findings show that concurrent auditory input can affect both the temporal and featural properties of visual percepts.
ABSTRACT: The way we perceive an object depends both on feedforward, bottom-up processing of its physical stimulus properties and on top-down factors such as attention, context, expectation, and task relevance. Here we compared neural activity elicited by varying perceptions of the same physical image-a bistable moving image in which perception spontaneously alternates between dissociated fragments and a single, unified object. A time-frequency analysis of EEG changes associated with the perceptual switch from object to fragment and vice versa revealed a greater decrease in alpha (8-12 Hz) accompanying the switch to object percept than to fragment percept. Recordings of event-related potentials elicited by irrelevant probes superimposed on the moving image revealed an enhanced positivity between 184 and 212 ms when the probes were contained within the boundaries of the perceived unitary object. The topography of the positivity (P2) in this latency range elicited by probes during object perception was distinct from the topography elicited by probes during fragment perception, suggesting that the neural processing of probes differed as a function of perceptual state. Two source localization algorithms estimated the neural generator of this object-related difference to lie in the lateral occipital cortex, a region long associated with object perception. These data suggest that perceived objects attract attention, incorporate visual elements occurring within their boundaries into unified object representations, and enhance the visual processing of elements occurring within their boundaries. Importantly, the perceived object in this case emerged as a function of the fluctuating perceptual state of the viewer.
Journal of Vision 07/2013; 13(13). · 2.48 Impact Factor
ABSTRACT: Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Journal of Neuroscience 05/2013; 33(21):9194-9201. · 6.91 Impact Factor
ABSTRACT: Our senses interact in daily life through multisensory integration, facilitating perceptual processes and behavioral responses. The neural mechanisms proposed to underlie this multisensory facilitation include anatomical connections directly linking early sensory areas, indirect connections to higher-order multisensory regions, as well as thalamic connections. Here we examine the relationship between white matter connectivity, as assessed with diffusion tensor imaging, and individual differences in multisensory facilitation and provide the first demonstration of a relationship between anatomical connectivity and multisensory processing in typically developed individuals. Using a whole-brain analysis and contrasting anatomical models of multisensory processing we found that increased connectivity between parietal regions and early sensory areas was associated with the facilitation of reaction times to multisensory (auditory-visual) stimuli. Furthermore, building on prior animal work suggesting the involvement of the superior colliculus in this process, using probabilistic tractography we determined that the strongest cortical projection area connected with the superior colliculus includes the region of connectivity implicated in our independent whole-brain analysis.
ABSTRACT: It is widely reported that inverting a face dramatically affects its recognition. Previous studies have shown that face inversion increases the amplitude and delays the latency of the face-specific N170 component of the event-related potential (ERP) and also enhances the amplitude of the occipital P1 component (latency 100-132 ms). The present study investigates whether these effects of face inversion can be modulated by visual spatial attention. Participants viewed two streams of visual stimuli, one to the left and one to the right of fixation. One stream consisted of a sequence of alphanumeric characters at 6.67 Hz, and the other stream consisted of a series of upright and inverted images of faces and houses presented in randomized order. The participants' task was to attend selectively to one or the other of the streams (during different blocks) in order to detect infrequent target stimuli. ERPs elicited by inverted faces showed larger P1 amplitudes compared to upright faces, but only when the faces were attended. In contrast, the N170 amplitude was larger to inverted than to upright faces only when the faces were not attended. The N170 peak latency was delayed to inverted faces regardless of attention condition. These inversion effects were face specific, as similar effects were absent for houses. These results suggest that early stages of face-specific processing can be enhanced by attention, but when faces are not attended the onset of face-specific processing is delayed until the latency range of the N170.
ABSTRACT: In a previous study of visual-spatial attention, Martinez et al. (2007) replicated the well-known finding that stimuli at attended locations elicit enlarged early components in the averaged event-related potential (ERP), which were localized to extrastriate visual cortex. The mechanisms that underlie these attention-related ERP modulations in the latency range of 80-200 ms, however, remain unclear. The main question is whether attention produces increased ERP amplitudes in time-domain averages by augmenting stimulus-triggered neural activity, or alternatively, by increasing the phase-locking of ongoing EEG oscillations to the attended stimuli. We compared these alternative mechanisms using Morlet wavelet decompositions of event-related EEG changes. By analyzing single-trial spectral amplitudes in the theta (4-8 Hz) and alpha (8-12 Hz) bands, which were the dominant frequencies of the early ERP components, it was found that stimuli at attended locations elicited enhanced neural responses in the theta band in the P1 (88-120 ms) and N1 (148-184 ms) latency ranges that were additive with the ongoing EEG. In the alpha band there was evidence for both increased additive neural activity and increased phase-synchronization of the EEG following attended stimuli, but systematic correlations between pre- and post-stimulus alpha activity were more consistent with an additive mechanism. These findings provide the strongest evidence to date in humans that short-latency neural activity elicited by stimuli within the spotlight of spatial attention is boosted or amplified at early stages of processing in extrastriate visual cortex.
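The single-trial Morlet wavelet analysis described above can be sketched in outline: build a complex Morlet wavelet at the frequency of interest (theta or alpha band), convolve it with each trial's EEG, and take the magnitude of the result as the spectral amplitude envelope. The following Python/NumPy sketch assumes a 7-cycle wavelet and synthetic data; the parameter choices and function names are illustrative, not those used in the study.

```python
import numpy as np

def morlet_wavelet(freq, sfreq, n_cycles=7):
    """Complex Morlet wavelet: a Gaussian-windowed complex exponential at `freq` Hz."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal std of the Gaussian
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / sfreq)
    wavelet = np.exp(-t ** 2 / (2 * sigma_t ** 2)) * np.exp(2j * np.pi * freq * t)
    return wavelet / np.abs(wavelet).sum()           # unit-gain normalization

def single_trial_amplitude(trial, freq, sfreq, n_cycles=7):
    """Spectral amplitude envelope at `freq` Hz for one single-trial EEG epoch."""
    w = morlet_wavelet(freq, sfreq, n_cycles)
    return np.abs(np.convolve(trial, w, mode='same'))

# Illustrative demo: a pure 6 Hz (theta-band) oscillation yields a larger
# amplitude envelope at the matching analysis frequency than at 10 Hz.
sfreq = 250.0
t = np.arange(0, 2, 1 / sfreq)
trial = np.sin(2 * np.pi * 6 * t)
theta_amp = single_trial_amplitude(trial, 6, sfreq)[200:300].mean()
alpha_amp = single_trial_amplitude(trial, 10, sfreq)[200:300].mean()
```

The wavelet's width (here 7 cycles) trades temporal against frequency resolution; averaging these single-trial envelopes across trials, rather than wavelet-transforming the averaged ERP, is what allows additive power changes to be distinguished from phase-locking effects.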
ABSTRACT: In this chapter, we review studies that have extended the RT-based chronometric investigation of cross-modal spatial attention by utilizing psychophysical measures that better isolate perceptual-level processes. In addition, neurophysiological and neuroimaging methods have been combined with these psychophysical approaches to identify changes in neural activity that might underlie the cross-modal consequences of spatial attention on perception. These methods have also examined neural activity within the cue–target interval that might reflect supramodal (or modality specific) control of spatial attention and subsequent anticipatory biasing of activity within sensory regions of the cortex.
The Neural Bases of Multisensory Processes, edited by Micah M. Murray and Mark T. Wallace, 01/2012, Chapter 26; CRC Press. ISBN: 9781439812174
ABSTRACT: Schizophrenia is associated with perceptual and cognitive dysfunction including impairments in visual attention. These impairments may be related to deficits in early stages of sensory/perceptual processing, particularly within the magnocellular/dorsal visual pathway. In the present study, subjects viewed high and low spatial frequency (SF) gratings designed to test functioning of the parvocellular/magnocellular pathways, respectively. Schizophrenia patients and healthy controls attended to either the low SF (magnocellularly biased) or high SF (parvocellularly biased) gratings. Functional magnetic resonance imaging (fMRI) and recordings of event-related potentials (ERPs) were carried out during task performance. Patients were impaired at detecting low-frequency targets. ERP amplitudes to low-frequency gratings were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the selection negativity), which is regarded as a neural index of feature-selective attention. Similarly, fMRI revealed that activity in extrastriate visual cortex was reduced in patients during attention to low, but not high, SF. In contrast, activity in frontal and parietal areas, previously implicated in the control of attention, did not differ between patients and controls. These findings suggest that impaired sensory processing of magnocellularly biased stimuli leads to impairments in the effective processing of attended stimuli, even when the attention control systems themselves are intact.
ABSTRACT: An inattentional blindness paradigm was adapted to measure ERPs elicited by visual contour patterns that were or were not consciously perceived. In the first phase of the experiment, subjects performed an attentionally demanding task while task-irrelevant line segments formed square-shaped patterns or random configurations. After the square patterns had been presented 240 times, subjects' awareness of these patterns was assessed. More than half of all subjects, when queried, failed to notice the square patterns and were thus considered inattentionally blind during this first phase. In the second phase of the experiment, the task and stimuli were the same, but following this phase, all of the subjects reported having seen the patterns. ERPs recorded over the occipital pole differed in amplitude from 220 to 260 msec for the pattern stimuli compared with the random arrays regardless of whether subjects were aware of the patterns. At subsequent latencies (300-340 msec) however, ERPs over bilateral occipital-parietal areas differed between patterns and random arrays only when subjects were aware of the patterns. Finally, in a third phase of the experiment, subjects viewed the same stimuli, but the task was altered so that the patterns became task relevant. Here, the same two difference components were evident but were followed by a series of additional components that were absent in the first two phases of the experiment. We hypothesize that the ERP difference at 220-260 msec reflects neural activity associated with automatic contour integration whereas the difference at 300-340 msec reflects visual awareness, both of which are dissociable from task-related postperceptual processing.
Journal of Cognitive Neuroscience 08/2011; 24(2):287-303. · 4.49 Impact Factor
ABSTRACT: Recordings of event-related potentials (ERPs) were combined with structural and functional magnetic resonance imaging (fMRI) to investigate the timing and localization of stimulus selection processes during visual-spatial attention to pattern-reversing gratings. Pattern reversals were presented in random order to the left and right visual fields at a rapid rate, while subjects attended to the reversals in one field at a time. On separate runs, stimuli were presented in the upper and lower visual quadrants. The earliest ERP component (C1, peaking at around 80 ms), which inverted in polarity for upper versus lower field stimuli and was localized in or near visual area V1, was not modulated by attention. In the latency range 80-250 ms, multiple components were elicited that were increased in amplitude by attention and were colocalized with fMRI activations in specific visual cortical areas. The principal anatomical sources of these attention-sensitive components were localized by fMRI-seeded dipole modeling as follows: P1 (ca. 100 ms-source in motion-sensitive area MT+), C2 (ca. 130 ms-same source as C1), N1a (ca. 145 ms-source in horizontal intraparietal sulcus), N1b (ca. 165 ms-source in fusiform gyrus, area V4/V8), N1c (ca. 180 ms-source in posterior intraparietal sulcus, area V3A), and P2 (ca. 220 ms-multiple sources, including parieto-occipital sulcus, area V6). These results support the hypothesis that spatial attention acts to amplify both feed-forward and feedback signals in multiple visual areas of both the dorsal and ventral streams of processing.
Human Brain Mapping 04/2011; 33(6):1334-51. · 6.88 Impact Factor
ABSTRACT: Steady-state visual evoked potentials (SSVEPs) were recorded from action videogame players (VGPs) and from non-videogame players (NVGPs) during an attention-demanding task. Participants were presented with a multi-stimulus display consisting of rapid sequences of alphanumeric stimuli presented at rates of 8.6/12 Hz in the left/right peripheral visual fields, along with a central square at fixation flashing at 5.5 Hz and a letter sequence flashing at 15 Hz at an upper central location. Subjects were cued to attend to one of the peripheral or central stimulus sequences and detect occasional targets. Consistent with previous behavioral studies, VGPs detected targets with greater speed and accuracy than NVGPs. This behavioral advantage was associated with an increased suppression of SSVEP amplitudes to unattended peripheral sequences in VGPs relative to NVGPs, whereas the magnitude of the attended SSVEPs was equivalent in the two groups. Group differences were also observed in the event-related potentials to targets in the alphanumeric sequences, with the target-elicited P300 component being of larger amplitude in VGPs than NVGPs. These electrophysiological findings suggest that the superior target detection capabilities of the VGPs are attributable, at least in part, to enhanced suppression of distracting irrelevant information and more effective perceptual decision processes.
Journal of Neuroscience 01/2011; 31(3):992-8. · 6.91 Impact Factor
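Frequency tagging of the kind described above (distinct flicker rates such as 8.6, 12, 5.5, and 15 Hz for each stimulus stream) allows each stream's SSVEP amplitude to be read directly off the EEG amplitude spectrum at its tagging frequency, provided the analysis epoch contains an integer number of stimulation cycles. A minimal Python/NumPy sketch with synthetic data and illustrative parameters, not the study's actual analysis:

```python
import numpy as np

def ssvep_amplitude(eeg, sfreq, tag_freq):
    """Amplitude (in signal units) at a frequency-tagged stimulation rate, via FFT."""
    n = len(eeg)
    amp_spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / n   # scale so a sine of amplitude A reads A
    freqs = np.fft.rfftfreq(n, 1.0 / sfreq)
    return amp_spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Illustrative demo: two tagged flicker responses (8.6 Hz and 12 Hz, as for the
# two peripheral streams) with different underlying amplitudes.
sfreq = 250.0
t = np.arange(0, 10, 1 / sfreq)  # 10 s epoch -> 0.1 Hz frequency resolution
eeg = 3.0 * np.sin(2 * np.pi * 8.6 * t) + 1.0 * np.sin(2 * np.pi * 12.0 * t)
amp_86 = ssvep_amplitude(eeg, sfreq, 8.6)
amp_12 = ssvep_amplitude(eeg, sfreq, 12.0)
```

With a 10 s epoch the 0.1 Hz bin spacing places 8.6 Hz and 12 Hz on exact FFT bins, so there is no spectral leakage; with arbitrary epoch lengths a window function or zero-padding would be needed before comparing amplitudes across conditions.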