The Chronometry of Attention‐Modulated Processing and Automatic Mismatch Detection

Department of Neuroscience, Rose Kennedy Center, Albert Einstein College of Medicine
Psychophysiology (Impact Factor: 3.18). 06/1992; 29(4):412-430. DOI: 10.1111/j.1469-8986.1992.tb01714.x

ABSTRACT Event-related potentials were recorded from normal subjects in an auditory selective attention task. Targets were rare longer (170-ms) tones of a designated pitch, embedded in a sequence of 100-ms standard tones. The effects of attention-modulated processing were evident in the event-related potentials elicited by the standards. Those to relevant standards were similar for easy (1000 Hz vs. 2000 Hz) and hard (1000 Hz vs. 1030 Hz) pitch separations, and were more negative frontocentrally than those to irrelevant standards. Difference waveforms (attended minus unattended standards) revealed Nd, a negative deflection that was earlier in latency for the easy task (onset, 120 ms; peak, 250 ms) than for the hard task (onset, 250 ms; peak, 350 ms). The speed of detection of the deviant longer tones was insensitive to the attention-modulated processes indexed by Nd. Median reaction time did not differ between tasks, although there were more misses and false alarms in the hard task (and nearly all of the latter were to the irrelevant longer tones). Neither direction of attention nor task difficulty affected the latency of mismatch negativity, N2, or P3 (as identified in difference waveforms: attended or unattended longer tones minus their respective standards). The data suggest that performance was guided by two independent but converging processes, automatic mismatch detection of the longer tone and attention-modulated processing of pitch, followed by selection of response.
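The Nd analysis described above is a simple subtraction of averaged waveforms followed by peak picking. A minimal sketch of that computation, using synthetic grand-average traces rather than real EEG (all variable names, the sampling rate, and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the Nd difference-waveform analysis: subtract the average
# ERP to unattended standards from that to attended standards, then
# locate the negative peak. Data here are synthetic; the attended
# trace carries an extra negativity centered near 250 ms, mimicking
# the easy-task Nd reported in the abstract.

fs = 500                           # assumed sampling rate, Hz
t = np.arange(-0.1, 0.6, 1 / fs)   # epoch: -100 ms to 600 ms

# Synthetic grand averages (microvolts).
unattended_avg = 2.0 * np.sin(2 * np.pi * 3 * t) * np.exp(-t**2 / 0.02)
nd_component = -1.5 * np.exp(-((t - 0.25) ** 2) / (2 * 0.05**2))
attended_avg = unattended_avg + nd_component

# Nd difference waveform: attended minus unattended standards.
nd_wave = attended_avg - unattended_avg

peak_idx = np.argmin(nd_wave)      # Nd is a negativity, so take the minimum
peak_latency_ms = t[peak_idx] * 1000
print(f"Nd peak latency: {peak_latency_ms:.0f} ms")
```

With real data the same subtraction would be applied per electrode to trial-averaged epochs, and onset latency is typically estimated as the first sample where the difference wave departs reliably from zero.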

    ABSTRACT: Event-related potentials were recorded to tones that subjects ignored while reading a book of their choosing. In all conditions, 90% of the tones were 100 msec in duration and 10% of the tones were 170 msec in duration. In a control condition, a customary oddball paradigm was used in which all of the tones were identical except for the longer duration tones. In two conditions, the tones varied over a wide range of tonal frequencies from 700 to 2050 Hz in 10 steps of 150 Hz. In another condition, the tones varied over the same frequencies but also varied in intensity from about 60 to 87 dB in steps of 3 dB. Thus, there was no "standard" tone in the sense of a frequently presented tone that had identical stimulus features. A mismatch negativity (MMN) was elicited in all conditions. The data are discussed in terms of the storage of information in the memory upon which the MMN is based.
    Journal of Cognitive Neuroscience 01/1995; 7(1):81-94. DOI:10.1162/jocn.1995.7.1.81 · 4.69 Impact Factor
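The stimulus design in this study is easy to make concrete: 90% short and 10% long tones, each drawn from ten frequencies spanning 700-2050 Hz in 150-Hz steps, so no single tone recurs often enough to serve as a feature-identical standard. A hedged sketch of such a sequence generator (parameter names and the seeded generator are illustrative):

```python
import random

# Illustrative oddball-sequence generator for the varying-frequency
# conditions described above: duration is the only dimension with a
# fixed 90/10 probability structure; frequency is sampled uniformly
# from ten values, so there is no feature-identical "standard" tone.

FREQS_HZ = [700 + 150 * i for i in range(10)]   # 700, 850, ..., 2050 Hz
STANDARD_MS, DEVIANT_MS = 100, 170
P_DEVIANT = 0.10

def make_sequence(n_tones, seed=0):
    rng = random.Random(seed)
    seq = []
    for _ in range(n_tones):
        dur = DEVIANT_MS if rng.random() < P_DEVIANT else STANDARD_MS
        seq.append((rng.choice(FREQS_HZ), dur))
    return seq

seq = make_sequence(1000)
n_deviants = sum(1 for _, dur in seq if dur == DEVIANT_MS)
print(f"{n_deviants} deviants out of {len(seq)} tones")
```

Real experiments usually add constraints this sketch omits (e.g., a minimum number of standards between deviants), but the core point survives: only duration deviance is statistically rare, which is what the MMN must be keyed to.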
    ABSTRACT: Speech perception in noise is still difficult for cochlear implant (CI) users even with many years of CI use. This study aimed to investigate neurophysiological and behavioral foundations for CI-dependent speech perception in noise. Seventeen post-lingual CI users and twelve age-matched normal hearing adults participated in two experiments. In Experiment 1, CI users' auditory-only word perception in noise (white noise, two-talker babble; at 10 dB SNR) degraded by about 15%, compared to that in quiet (48% accuracy). CI users' auditory-visual word perception was generally better than auditory-only perception. Auditory-visual word perception was degraded under information masking by the two-talker noise (69% accuracy), compared to that in quiet (77%). Such degradation was not observed for white noise (77%), suggesting that overcoming information masking is an important issue for improving CI users' speech perception. In Experiment 2, event-related cortical potentials were recorded in an auditory oddball task in quiet and noise (white noise only). Similarly to the normal hearing participants, the CI users showed the mismatch negative response (MNR) to deviant speech in quiet, indicating automatic speech detection. In noise, the MNR disappeared in the CI users, and only the good CI performers (above 66% accuracy) showed P300 (P3) like the normal hearing participants. P3 amplitude in the CI users was positively correlated with speech perception scores. These results suggest that CI users' difficulty in speech perception in noise is associated with the lack of automatic speech detection indicated by the MNR. Successful performance in noise may begin with attended auditory processing indicated by P3.
    Hearing Research 10/2014; 316. DOI:10.1016/j.heares.2014.08.001 · 2.85 Impact Factor
    ABSTRACT: Research suggests that infants may be sensitive to the prosodic structure of their native language at an earlier age than the segmental structure. In adults, the right cerebral hemisphere is more involved than the left in processing certain types of prosodic information. A hypothesis derived from these 2 research findings is that similar right hemisphere specialization for prosodic information would be found in infants. Event‐related potentials (ERPs) recorded to tone probes superimposed on English and Italian passages (languages with different prosodic structure) and on English and Dutch passages (languages with similar prosodic structure) were used to test this hypothesis in 3‐month‐old infants (n = 24). Significant differences in ERP amplitude measures indicated that both left and right cerebral hemispheres were sensitive to differences between English and the 2 foreign languages, and that both play a role in processing speech in the early stages of language acquisition.
    Developmental Neuropsychology 01/1999; 15(1):73-109. DOI:10.1080/87565649909540740 · 2.67 Impact Factor