Preattentive Binding of Auditory and Visual Stimulus Features

Institute for Psychology, Hungarian Academy of Sciences, Budapest, Hungary.
Journal of Cognitive Neuroscience (Impact Factor: 4.09). 03/2005; 17(2):320-39. DOI: 10.1162/0898929053124866
Source: PubMed


We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended, or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli (standards), which differed from each other in two stimulus features, and two infrequently occurring stimuli (deviants), which combined one feature of one standard with the other feature of the other standard. Deviant stimuli elicited MMN responses with similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance-detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature-integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities; (2) preattentive processes of feature binding exist; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
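The logic of this design can be made concrete with a short sketch. Below is a minimal illustration in Python, assuming color and motion direction as the two visual features (the conjunction reportedly used in the visual experiment, per the citing excerpts below); the specific values, probabilities, and sequence length are placeholders, not the study's actual parameters:

```python
import random

# Two standards differ in BOTH features; each deviant recombines one
# feature of one standard with the other feature of the other standard.
# Feature values and probabilities are illustrative placeholders.
STANDARD_1 = ("red", "leftward")
STANDARD_2 = ("green", "rightward")
DEVIANT_1 = ("red", "rightward")    # crosswise recombination
DEVIANT_2 = ("green", "leftward")

def make_sequence(n_trials=400, p_deviant=0.1, seed=0):
    """Random oddball sequence: standards frequent, deviants rare."""
    rng = random.Random(seed)
    stimuli = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            stimuli.append(rng.choice([DEVIANT_1, DEVIANT_2]))
        else:
            stimuli.append(rng.choice([STANDARD_1, STANDARD_2]))
    return stimuli

sequence = make_sequence()
print(sequence[:8])
```

Note that each individual feature value (each color, each direction) occurs equally often across standards and deviants combined; only the conjunction is rare. Any MMN/vMMN to the deviants therefore implies that the memory representation encoded the feature combination rather than single features.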

    • "Deviant in attentional blink position Berti, 2011 Central task, independent of the sequence of vMMN-related stimuli Czigler et al., 2002, 2004, 2006a,b; Lorenzo-López et al., 2004; Pazo-Alvarez et al., 2004a,b; Besle et al., 2005, 2007; Winkler et al., 2005; Amenedo et al., 2007; Czigler and Pató, 2009; Flynn et al., 2009, Experiment 2; Kimura et al., 2010a; Müller et al., 2010, 2012; Urakawa et al., 2010a,b; Qiu et al., 2011; Stefanics et al., 2011, 2012; Stefanics and Czigler, 2012; Cléry et al., 2013a,b; Kecskés-Kovács et al., 2013b; Kimura and Takeda, 2013; Kremláček et al., 2013; Shi et al., 2013; van Rhijn et al., 2013; Kovács-Bálint et al., 2014; Si et al., 2014; Sulykos and Czigler, 2014 Central task with the standard and/or deviant of the vMMN-related stimuli Kenemans et al., 2003, 2010; Kimura et al., 2006a, 2010b— " independent " condition; Grimm et al., 2009; Clifford et al., 2010; Mo et al., 2011; Cleary et al., 2013; Kuldkepp et al., 2013; Shtyrov et al., 2013; Stothart and Kazanina, 2013; Tang et al., 2013 Central task, within the sequence of vMMN-related stimuli Tales et al., 1999, 2002, 2008, 2009; Stagg et al., 2004; Maekawa et al.*, 2005; 2009; 2011; Kimura et al., 2006b,2010c; Kremláček et al., 2006; Tales and Butler, 2006; Fonteneau and Davidoff, 2007; Hosák et al., 2008; Liu and Shi, 2008; Urban et al., 2008; Athanasopoulos et al., 2010; Chang et al., 2010, 2011; Froyen et al., 2010; Susac et al., 2010; Kimura, 2012; Files et al., 2013; Fujimura and Okanoya, 2013; Kreegipuu et al., 2013; Maekawa et al., 2013; Wang et al., 2013. *Together with an auditory task. "
    ABSTRACT: An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control "refractoriness" effects; (2) methods to control attention; (3) vMMN and veridical perception.
    Frontiers in Human Neuroscience 09/2014; 8:666. DOI:10.3389/fnhum.2014.00666 · 2.99 Impact Factor
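The review's first key issue, controlling "refractoriness" (stimulus-specific adaptation), is commonly addressed with an equal-probability control sequence, in which the deviant's physical stimulus appears with the same low probability but violates no regularity. A minimal sketch of the resulting comparison, assuming hypothetical epoch arrays rather than any particular recording pipeline:

```python
import numpy as np

# Hypothetical single-trial epoch arrays (trials x time samples); in
# practice these would come from an EEG preprocessing pipeline.
rng = np.random.default_rng(0)
fs = 1000                                    # assumed sampling rate, Hz
deviant_epochs = rng.normal(size=(80, 500))  # placeholder data, 500-ms epochs
control_epochs = rng.normal(size=(80, 500))  # same physical stimulus, drawn from
                                             # an equal-probability control sequence

# Averaging across trials gives the ERPs; the "genuine" (v)MMN estimate is
# deviant minus physically identical control, which removes differences due
# to stimulus-specific adaptation rather than deviance detection.
difference_wave = deviant_epochs.mean(axis=0) - control_epochs.mean(axis=0)

# Mean amplitude in the typical vMMN latency range (200-400 ms post-stimulus).
window = slice(int(0.200 * fs), int(0.400 * fs))
print(f"Mean difference amplitude: {difference_wave[window].mean():.3f} (a.u.)")
```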
    • "To test our hypothesis, we recorded event-related brain potentials (ERPs), and measured the mismatch negativity (MMN) component. The MMN is an appropriate tool because it can index sound change detection irrespective of the direction of attention (Sussman et al., 2003; Winkler et al., 2005). MMN is generated bilaterally within auditory cortices and the negative waveform is observed with a fronto-central scalp distribution that inverts in polarity at the mastoid electrodes (Vaughan and Ritter, 1970; Giard et al., 2014). "
    ABSTRACT: Stream segregation is the process by which the auditory system disentangles the mixture of sound inputs into discrete sources that cohere across time. The length of time required for this to occur is termed the "buildup" period. In the current study, we used the buildup period as an index of how quickly sounds are segregated into constituent parts. Specifically, we tested the hypothesis that stimulus context impacts the timing of the buildup and, therefore, affects when stream segregation is detected. To measure the timing of the buildup, we recorded the mismatch negativity (MMN) component of event-related brain potentials (ERPs) during passive listening to determine when the streams were neurophysiologically segregated. In each condition, a pattern of repeating low (L) and high (H) tones (L-L-H) was presented in trains of stimuli separated by silence, with the H tones forming a simple intensity oddball paradigm and the L tones serving as distractors. To determine the timing of the buildup, probe tones occurred in two positions of the trains: early (within the buildup period) and late (past the buildup period). The context was manipulated by presenting roving vs. non-roving frequencies across trains in two conditions. MMNs were elicited by intensity probe tones in the Non-Roving condition (early and late positions) and in the Roving condition (late position only), indicating that neurophysiological segregation occurred faster in the Non-Roving condition. This suggests a shorter buildup period when frequency was repeated from train to train. Overall, our results demonstrate that the dynamics of the environment influence the way in which the auditory system extracts regularities from the input. The results support the hypothesis that the buildup to segregation is highly dependent upon stimulus context and that the auditory system works to maintain a consistent representation of the environment when no new information suggests that reanalyzing the scene is necessary.
    Frontiers in Neuroscience 04/2014; 8(8):93. DOI:10.3389/fnins.2014.00093 · 3.66 Impact Factor
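The train structure described in the abstract above lends itself to a compact schematic. The sketch below is illustrative only; frequencies, intensities, train lengths, and probe positions are placeholders, not the study's actual values:

```python
import random

def make_train(low_hz, high_hz, n_triplets=15, probe_positions=(2, 12)):
    """One train of repeating L-L-H triplets. H tones form an intensity
    oddball: louder probe tones occur at an early triplet position
    (within the presumed buildup period) and a late one (past it)."""
    train = []
    for i in range(n_triplets):
        train += [("L", low_hz, 60), ("L", low_hz, 60)]  # distractor tones
        intensity = 70 if i in probe_positions else 60   # placeholder dB values
        train.append(("H", high_hz, intensity))
    return train

def make_condition(roving, n_trains=20, seed=0):
    """Non-roving: the same L/H frequency pair repeats across trains.
    Roving: the pair changes from train to train, so the buildup of
    segregation has to restart with each new train."""
    rng = random.Random(seed)
    pairs = [(440, 880), (494, 988), (523, 1046), (587, 1175)]  # placeholder Hz
    return [make_train(*(pairs[0] if not roving else rng.choice(pairs)))
            for _ in range(n_trains)]  # trains separated by silence in the paradigm

non_roving_trains = make_condition(roving=False)
roving_trains = make_condition(roving=True)
```

Comparing MMN to the early vs. late probes across the two conditions then indexes how quickly segregation builds up in each context.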
    • "As for the classic MMN, vMMN has been documented for violations of visual regularities in several elementary visual features, including motion direction (Kremláček et al., 2006; Pazo-Alvarez et al., 2004), forms in motion (Besle et al., 2005), stimulus orientation (Astikainen et al., 2008), spatial frequency (Maekawa et al., 2005) and color (Czigler et al., 2004). In addition , vMMN has been revealed in response to higher order visual violations , such as the conjunction of visual features like color and moving direction (Winkler et al., 2005), the violation of temporal regularities (Czigler et al., 2006b), the omission of visual stimuli within an otherwise regular stream (Czigler et al., 2006a), the appearance of an irregular change in the visual stream instead of a regular one (Czigler et al., 2006a), changes in facial expressions (Astikainen and Hietanen, 2009; Chang et al., 2010; Gayle et al., 2012; Stefanics et al., 2012; Susac et al., 2010; Zhao and Li, 2006), changes in symmetry (Kecskés-Kovács et al., 2013b), presentation of left vs. right hand stimuli (Stefanics and Czigler, 2012) or changes in the gender of a face (Kecskés-Kovács et al., 2013a). Differently from the auditory MMN, the vMMN is usually recorded over occipital electrodes or parieto-occipital electrodes (Maekawa et al., 2005) in the 200–400 ms latency range. "
    ABSTRACT: Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual mismatch negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in the early deaf was paired with a reduced response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.
    NeuroImage 03/2014; 94. DOI:10.1016/j.neuroimage.2014.02.031 · 6.36 Impact Factor