Preattentive Binding of Auditory and Visual Stimulus Features

University of Helsinki, Helsinki, Uusimaa, Finland
Journal of Cognitive Neuroscience (Impact Factor: 4.09). 03/2005; 17(2):320-39. DOI: 10.1162/0898929053124866
Source: PubMed


We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli), and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses with similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

Available from: László Balázs
    • "Note that the fact that feature regularities can be extracted from a sequence of sounds even when other features vary (e.g., Gomes, Ritter & parallel MMNs (e.g., Takegata, Paavilainen, Näätänen & Winkler, 2001; Wolff & Schröger, 1995) does not contradict the idea of unitary representations underlying MMN, because featural information can be extracted from the unitary traces (for detailed discussion, see Ritter et al., 1995; Winkler, 2007). In the auditory modality, features are conjoined irrespective of the direction of focused attention (Takegata et al., 2005; Winkler et al., 2005). Thus the common-fate-of-different-features effect found in the current study did not require participants to focus on the sounds, which speaks to the strong influence of the first impression that creates the primacy bias pattern in MMN. "
    ABSTRACT: Although first impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumption can explain biases in automatic relevance filtering outside of deliberate behaviour. This paper features two studies in which participants were asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block, and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define which sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about the predictable versus unpredictable nature of each tone within the first encountered context, weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction errors, while profoundly distorting the effects of transition statistics through the assumptions formed on the basis of first impressions.
    Full-text · Article · Feb 2016 · Biological Psychology
    • "Deviant in attentional blink position: Berti, 2011.
      Central task, independent of the sequence of vMMN-related stimuli: Czigler et al., 2002, 2004, 2006a,b; Lorenzo-López et al., 2004; Pazo-Alvarez et al., 2004a,b; Besle et al., 2005, 2007; Winkler et al., 2005; Amenedo et al., 2007; Czigler and Pató, 2009; Flynn et al., 2009, Experiment 2; Kimura et al., 2010a; Müller et al., 2010, 2012; Urakawa et al., 2010a,b; Qiu et al., 2011; Stefanics et al., 2011, 2012; Stefanics and Czigler, 2012; Cléry et al., 2013a,b; Kecskés-Kovács et al., 2013b; Kimura and Takeda, 2013; Kremláček et al., 2013; Shi et al., 2013; van Rhijn et al., 2013; Kovács-Bálint et al., 2014; Si et al., 2014; Sulykos and Czigler, 2014.
      Central task with the standard and/or deviant of the vMMN-related stimuli: Kenemans et al., 2003, 2010; Kimura et al., 2006a, 2010b ("independent" condition); Grimm et al., 2009; Clifford et al., 2010; Mo et al., 2011; Cleary et al., 2013; Kuldkepp et al., 2013; Shtyrov et al., 2013; Stothart and Kazanina, 2013; Tang et al., 2013.
      Central task, within the sequence of vMMN-related stimuli: Tales et al., 1999, 2002, 2008, 2009; Stagg et al., 2004; Maekawa et al.*, 2005, 2009, 2011; Kimura et al., 2006b, 2010c; Kremláček et al., 2006; Tales and Butler, 2006; Fonteneau and Davidoff, 2007; Hosák et al., 2008; Liu and Shi, 2008; Urban et al., 2008; Athanasopoulos et al., 2010; Chang et al., 2010, 2011; Froyen et al., 2010; Susac et al., 2010; Kimura, 2012; Files et al., 2013; Fujimura and Okanoya, 2013; Kreegipuu et al., 2013; Maekawa et al., 2013; Wang et al., 2013.
      *Together with an auditory task. "
    ABSTRACT: An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control "refractoriness" effects; (2) methods to control attention; (3) vMMN and veridical perception.
    Full-text · Article · Sep 2014 · Frontiers in Human Neuroscience
    • "To test our hypothesis, we recorded event-related brain potentials (ERPs), and measured the mismatch negativity (MMN) component. The MMN is an appropriate tool because it can index sound change detection irrespective of the direction of attention (Sussman et al., 2003; Winkler et al., 2005). MMN is generated bilaterally within auditory cortices and the negative waveform is observed with a fronto-central scalp distribution that inverts in polarity at the mastoid electrodes (Vaughan and Ritter, 1970; Giard et al., 2014). "
    ABSTRACT: Stream segregation is the process by which the auditory system disentangles the mixture of sound inputs into discrete sources that cohere across time. The length of time required for this to occur is termed the "buildup" period. In the current study, we used the buildup period as an index of how quickly sounds are segregated into constituent parts. Specifically, we tested the hypothesis that stimulus context impacts the timing of the buildup and, therefore, affects when stream segregation is detected. To measure the timing of the buildup, we recorded the mismatch negativity (MMN) component of event-related brain potentials (ERPs) during passive listening to determine when the streams were neurophysiologically segregated. In each condition, a pattern of repeating low (L) and high (H) tones (L-L-H) was presented in trains of stimuli separated by silence, with the H tones forming a simple intensity oddball paradigm and the L tones serving as distractors. To determine the timing of the buildup, probe tones occurred in two positions of the trains: early (within the buildup period) and late (past the buildup period). The context was manipulated by presenting roving vs. non-roving frequencies across trains in two conditions. MMNs were elicited by intensity probe tones in the Non-Roving condition (early and late positions) and the Roving condition (late position only), indicating that neurophysiologic segregation occurred faster in the Non-Roving condition. This suggests a shorter buildup period when frequency was repeated from train to train. Overall, our results demonstrate that the dynamics of the environment influence the way in which the auditory system extracts regularities from the input. The results support the hypothesis that the buildup to segregation is highly dependent upon stimulus context and that the auditory system works to maintain a consistent representation of the environment when no new information suggests that reanalyzing the scene is necessary.
    Full-text · Article · Apr 2014 · Frontiers in Neuroscience