Article

Validation of the Cochlear Implant Artifact Correction tool for auditory electrophysiology

... Continuous EEG activity was recorded from each listener using the Advanced Neuro Technology EEG system and a 64 channel Waveguard Cap (Rao et al. 2010;Miller & Zhang 2014). The Ag/AgCl electrodes on the cap were arranged in the standard 10-20 system with additional intermediate positions, and the ground electrode was located at the AFz position. ...
... Data were band-pass filtered from 0.5 to 20 Hz using FIR filters in sequential order (high-pass first, followed by low-pass) and then down-sampled to 500 Hz (MATLAB). Due to the presence of CI-related artifacts in the EEG signal, the blind source separation Infomax ICA algorithm (Bell & Sejnowski 1995) was applied to the data, with the initial number of components in the ICA matrix reflecting the number of electrodes used during the recordings for each subject (Miller & Zhang 2014). Electrodes that had been deactivated during the recordings were interpolated after performing the ICA. ...
... Independent components representing CI artifacts were identified and manually removed from each subject's ICA matrix using temporal and spatial criteria outlined by Gilley et al. (2006) and previously used in Miller and Zhang (2014). Components were defined as CI artifacts and subsequently removed if the artifact activation occurred at either stimulus onset or offset or if the duration of the activation was the same as the duration of the stimulus. ...
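The preprocessing sequence quoted above (sequential FIR filtering, down-sampling, Infomax ICA, manual rejection of CI-artifact components, then interpolation of deactivated electrodes) was originally carried out in MATLAB. Below is a minimal sketch of the same sequence using MNE-Python; the file name, the excluded component indices, and the bad-channel label are hypothetical placeholders, and the actual artifact components would be chosen by visual inspection against the temporal criteria described above.

```python
import mne
from mne.preprocessing import ICA

# Hypothetical continuous recording from one CI listener
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

# Sequential FIR filtering: high-pass at 0.5 Hz first, then low-pass at 20 Hz
raw.filter(l_freq=0.5, h_freq=None, fir_design="firwin")
raw.filter(l_freq=None, h_freq=20.0, fir_design="firwin")

# Down-sample to 500 Hz
raw.resample(500.0)

# Infomax ICA; by default the decomposition keeps as many components as the data
# support, mirroring the "number of components = number of electrodes" choice above
ica = ICA(method="infomax", random_state=0)
ica.fit(raw)

# Components whose activations are locked to stimulus onset/offset or span the
# full stimulus duration would be flagged here (indices are placeholders)
ica.exclude = [0, 3]
raw_clean = ica.apply(raw.copy())

# Deactivated electrodes are interpolated only after the ICA step
raw_clean.info["bads"] = ["T7"]  # hypothetical deactivated channel
raw_clean.interpolate_bads(reset_bads=True)
```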
Article
Objective: The present training study aimed to examine the fine-scale behavioral and neural correlates of phonetic learning in adult postlingually deafened cochlear implant (CI) listeners. The study investigated whether high variability identification training improved phonetic categorization of the /ba/-/da/ and /wa/-/ja/ speech contrasts and whether any training-related improvements in phonetic perception were correlated with neural markers associated with phonetic learning. It was hypothesized that training would sharpen phonetic boundaries for the speech contrasts and that changes in behavioral sensitivity would be associated with enhanced mismatch negativity (MMN) responses to stimuli that cross a phonetic boundary relative to MMN responses evoked using stimuli from the same phonetic category. Design: A computer-based training program was developed that featured multitalker variability and adaptive listening. The program was designed to help CI listeners attend to the important second formant transition cue that categorizes the /ba/-/da/ and /wa/-/ja/ contrasts. Nine adult CI listeners completed the training and 4 additional CI listeners that did not undergo training were included to assess effects of procedural learning. Behavioral pre-post tests consisted of identification and discrimination of the synthetic /ba/-/da/ and /wa/-/ja/ speech continua. The electrophysiologic MMN response elicited by an across phoneme category pair and a within phoneme category pair that differed by an acoustically equivalent amount was derived at pre-post test intervals for each speech contrast as well. Results: Training significantly enhanced behavioral sensitivity across the phonetic boundary and significantly altered labeling of the stimuli along the /ba/-/da/ continuum. While training only slightly altered identification and discrimination of the /wa/-/ja/ continuum, trained CI listeners categorized the /wa/-/ja/ contrast more efficiently than the /ba/-/da/ contrast across pre-post test sessions. Consistent with behavioral results, pre-post EEG measures showed the MMN amplitude to the across phoneme category pair significantly increased with training for both the /ba/-/da/ and /wa/-/ja/ contrasts, but the MMN was unchanged with training for the corresponding within phoneme category pairs. Significant brain-behavior correlations were observed between changes in the MMN amplitude evoked by across category phoneme stimuli and changes in the slope of identification functions for the trained listeners for both speech contrasts. Conclusions: The brain and behavior data of the present study provide evidence that substantial neural plasticity for phonetic learning in adult postlingually deafened CI listeners can be induced by high variability identification training. These findings have potential clinical implications related to the aural rehabilitation process following receipt of a CI device.
... It is well established that the obligatory P1-N1-P2 complex is sensitive to the spectrotemporal features of acoustic stimuli and can be elicited by speech sounds (see Martin et al., 2008, for a review). For example, acoustic features of consonants such as voice onset time (Digeser et al., 2009; Sharma et al., 2000; Zaehle et al., 2007), place of articulation (Tavabi et al., 2007), and manner of articulation (Hari, 1991; Miller & Zhang, 2014; Zhang et al., 2005) are differentially reflected in auditory ERP responses. Previous work has also demonstrated that cortical responses differ when elicited by tonal versus vowel and consonant stimuli (Ceponiene et al., 2005; Ceponiene et al., 2001; Woods & Elmasian, 1986). ...
... To match the duration characteristics across stimulus types, the stimulus was digitally edited to 170 ms (Sony Sound Forge 9.0, Sony Creative Software) using temporal stretching and shrinking via the Pitch Synchronous Overlap-Add technique (Moulines & Charpentier, 1990). The edited consonant duration was 60 ms, and the edited vowel duration was 110 ms (Miller & Zhang, 2014;Miller et al., 2016b;Sharma et al., 2000). ...
Article
Purpose Auditory sensory gating is a neural measure of inhibition and is typically measured with a click or tonal stimulus. This electrophysiological study examined if stimulus characteristics and the use of speech stimuli affected auditory sensory gating indices. Method Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance. Results Significant gating of P1–N1–P2 peaks was observed for all stimulus types. N1–P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response observed for natural speech compared to nonspeech and synthetic speech. Conclusions Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The results of the study indicate the amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
... In the presence of auditory stimuli, the electrical stimulation and radio-frequency signals of CIs impart electrical stimulation artifacts into the EEG recording (Wagner et al., 2018). While ICA has been previously used to identify CI artifacts in EEG recordings involving auditory evoked potentials (Gilley et al., 2006;Miller and Zhang, 2014), the nature of CI artifacts makes their reduction more challenging, especially for continuous tasks. In general, ICA separates statistically independent components from the mixed signal. ...
Article
Full-text available
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences in clinical and “real-world” listening environments and stimuli. Speech in the real world is often accompanied by visual cues, background environmental noise, and is generally in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine if brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalogram (EEG) while CI users listened/watched a naturalistic stimulus (i.e., the television show, “The Office”). We used continuous EEG to quantify “speech neural tracking” (i.e., TRFs, temporal response functions) to the show’s soundtrack and 8–12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB were presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they understood. Fifteen CI users reported progressively higher degrees of listening demand and less words and conversation with increasing background noise. Listening demand and conversation understanding in the audio-only condition was comparable to that of the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at a group level, in addition to eliciting strong individual differences. Mixed effect modeling showed that listening demand and conversation understanding were correlated to early cortical speech tracking, such that high demand and low conversation understanding occurred with lower amplitude TRFs. In the high noise condition, greater listening demand was negatively correlated to parietal alpha power, where higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality-of-life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures like self-perceived listening demand.
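Since "speech neural tracking" via temporal response functions (TRFs) is central to this study, a small illustration may help: a TRF is essentially a regularized linear mapping from the stimulus envelope to the EEG at a range of time lags. The sketch below uses plain NumPy ridge regression on a single channel with made-up data; it is not the authors' analysis pipeline, and the lag range, regularization strength, and sampling rate are arbitrary choices.

```python
import numpy as np

def fit_trf(envelope, eeg, sfreq, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Estimate a temporal response function by ridge regression.

    envelope : (n_times,) speech envelope
    eeg      : (n_times,) EEG from one channel at the same sampling rate
    Returns the lag axis in seconds and one TRF weight per lag.
    """
    lags = np.arange(int(tmin * sfreq), int(tmax * sfreq) + 1)
    # Lagged design matrix: each column is the envelope shifted by one lag
    X = np.zeros((len(envelope), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[: len(envelope) - lag]
        else:
            X[:lag, j] = envelope[-lag:]
    # Ridge solution: w = (X'X + alpha*I)^(-1) X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    return lags / sfreq, w

# Toy usage with random data standing in for a real recording
rng = np.random.default_rng(0)
envelope = rng.random(10 * 500)          # 10 s of envelope at 500 Hz
eeg = rng.standard_normal(10 * 500)      # 10 s of one EEG channel
lag_times, trf = fit_trf(envelope, eeg, sfreq=500)
```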
... However, one of the most challenging tasks in CI-based AEP research is to eliminate the CI-induced electrical artifact in AEP recordings. While there have been recent attempts in developing techniques that could aid in removing the CI-induced artifacts, [69][70][71] more research is needed to bring the CI-based EEG in the mainstream research and clinics. While the CI-induced electrical artifact is a problem for the scalp-recorded acoustical stimuli-evoked potentials, electrical stimuli-evoked potentials do not usually present themselves with such a problem. ...
Article
This article provides a brief overview of auditory evoked potentials (AEPs) and their application in the areas of research and clinics within the field of communication disorders. The article begins with providing a historical perspective within the context of the key scientific developments that led to the emergence of numerous types of AEPs. Furthermore, the article discusses the different AEP techniques in the light of their feasibility in clinics. As AEPs, because of their versatility, find their use across disciplines, this article also discusses some of the research questions that are currently being addressed using AEP techniques in the field of communication disorders and beyond. At the end, this article summarizes the shortcomings of the existing AEP techniques and provides a general perspective toward the future directions. The article is aimed at a broad readership including (but not limited to) students, clinicians, and researchers. Overall, this article may act as a brief primer for the new AEP users, and as an overview of the progress in the field of AEPs along with future directions, for those who already use AEPs on a routine basis.
... The CI stimulation artifacts could cause the erroneous detection of neural responses and could distort response properties. The removal of this unintentionally recorded stimulus artifact from the mixture has been proven to be a difficult task (Bahmer et al., 2008; Brown et al., 1994, 2000; Hay-McCutcheon et al., 2002; Miller & Zhang, 2014; Undurraga et al., 2013). The eABR postprocessing procedure mainly aimed at removing the CI stimulus artifact, as well as at improving the eABR signal-to-noise ratio. ...
Data
In patients with bilateral cochlear implants (CIs), pairing matched interaural electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, and spatial release from masking. Because clinical procedures typically do not include patient-specific interaural electrode pairing, it remains the case that each electrode is allocated to a generic frequency range, based simply on the electrode number. Two psychoacoustic techniques for determining interaurally paired electrodes have been demonstrated in several studies: interaural pitch comparison and interaural time difference (ITD) sensitivity. However, these two methods are rarely, if ever, compared directly. A third, more objective method is to assess the amplitude of the binaural interaction component (BIC) derived from electrically evoked auditory brainstem responses for different electrode pairings; a method has been demonstrated to be a potential candidate for bilateral CI users. Here, we tested all three measures in the same eight CI users. We found good correspondence between the electrode pair producing the largest BIC and the electrode pair producing the maximum ITD sensitivity. The correspondence between the pairs producing the largest BIC and the pitch-matched electrode pairs was considerably weaker, supporting the previously proposed hypothesis that whilst place pitch might adapt over time to accommodate mismatched inputs, sensitivity to ITDs does not adapt to the same degree.
... Key EEG studies have argued that late auditory evoked potentials provide a useful objective metric of performance in participants hearing through a CI (Firszt et al. 2002; Zhang et al. 2010, 2011). However, EEG does have a disadvantage in that CIs produce electrical noise that can interfere with recordings when long-duration speech stimuli are used, although artifact removal techniques can be used to minimize this issue (Viola et al. 2012; Mc Laughlin et al. 2013; Miller & Zhang 2014). ...
Article
Full-text available
Objectives: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, the authors used functional near-infrared spectroscopy to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design: The authors studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. The authors used functional near-infrared spectroscopy to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). The authors also used environmental sounds as a control stimulus. Behavioral measures consisted of the speech reception threshold, consonant-nucleus-consonant words, and AzBio sentence tests measured in quiet. Results: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the consonant-nucleus-consonant words and AzBio sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced the cortical activations in all implanted participants. Conclusions: Together, these data indicate that the responses the authors measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.
... For this reason, the average reference may not be suitable for all components of the AEP study. To reduce electrical artifacts, it is well accepted that the contralateral mastoid is one of the best reference electrodes for AEPs in CI users (He et al., 2012; Mc Laughlin et al., 2012; Miller and Zhang, 2014). Meanwhile, our results showed that different references for the AEP study do not affect the CI artifact. ...
Article
Full-text available
Background: Nose reference (NR), mastoid reference (MR), and montage average reference (MAR) are usually used in auditory event-related potential (AEP) studies with a recently developed reference electrode standardization technique (REST), which may reduce the reference effect. For children with cochlear implants (CIs), auditory deprivation may hinder normal development of the auditory cortex, and the reference effect may be different between CIs and a normal developing group. Methods: Thirteen right-side-CI children were recruited, comprising 7 males and 6 females, ages 2–5 years, with CI usage of ~1 year. Eleven sex- and age-matched healthy children were recruited for normal controls; 1,000 Hz pure tone evoked AEPs were recorded, and the data were re-referenced to NR, left mastoid reference (LMR, which is the opposite side of the implanted cochlear), MAR, and REST. CI artifact and P1–N1 complex (latency, amplitudes) at Fz were analyzed. Results: Confirmed P1–N1 complex could be found in Fz using NR, LMR, MAR, and REST with a 128-electrode scalp. P1 amplitude was larger using LMR than MAR and NR, while no statistically significant difference was found between NR and MAR in the CI group; REST had no significant difference with the three other references. In the control group, no statistically significant difference was found with different references. Group difference of P1 amplitude could be found when using MR, MAR, and REST. For P1 latency, no significant difference among the four references was shown, whether in the CI or control group. Group difference in P1 latency could be found in MR and MAR. N1 amplitude in LMR was significantly lower than NR and MAR in the control group. LMR, MAR, and REST could distinguish the difference in the N1 amplitude between the CI and control group. Contralateral MR or MAR was found to be better in differentiating CI children versus controls. No group difference was found for the artifact component. Conclusions: Different references for AEP studies do not affect the CI artifact. In addition, contralateral MR is preferable for P1–N1 component studies involving CI children, as well as methodology-like studies.
... To remove electrooculographic (EOG) artifacts, a blind source separation Infomax independent component analysis (ICA) algorithm (Bell and Sejnowski, 1995) was applied to the data. Based on spatial and temporal criteria, independent components that represented EOG activity were removed from the ICA matrix prior to averaging (Gilley et al., 2006; Jung et al., 2000; Miller and Zhang, 2014). The ERP epoch was 600 ms and consisted of a 100 ms pre-stimulus baseline and a 500 ms recording window. ...
... Thus, the main advantage of taking measurements from the auditory cortex-identifying differential responses to different forms of meaningful speech-is lost using this approach. Although there are methods to better remove artifacts from the cortical signal, this is not trivial and it is still unclear how accurately the signal reflects actual neural activity (Friesen & Picton, 2010;Mc Laughlin, Lopez Valdes, Reilly, & Zeng, 2013;Miller & Zhang, 2014;Somers, Verschueren, & Francart, 2018). Moreover, the use of electrophysiological measures requires infants and young children to remain quite still, something difficult to achieve without sedation. ...
Article
Full-text available
Much of what is known about the course of auditory learning following cochlear implantation is based on behavioral indicators that users are able to perceive sound. Both prelingually deafened children and postlingually deafened adults who receive cochlear implants display highly variable speech and language processing outcomes, although the basis for this is poorly understood. To date, measuring neural activity within the auditory cortex of implant recipients of all ages has been challenging, primarily because the use of traditional neuroimaging techniques is limited by the implant itself. Functional near-infrared spectroscopy (fNIRS) is an imaging technology that works with implant users of all ages because it is non-invasive, compatible with implant devices, and not subject to electrical artifacts. Thus, fNIRS can provide insight into processing factors that contribute to variations in spoken language outcomes in implant users, both children and adults. There are important considerations to be made when using fNIRS, particularly with children, to maximize the signal-to-noise ratio and to best identify and interpret cortical responses. This review considers these issues, recent data, and future directions for using fNIRS as a tool to understand spoken language processing in children and adults who hear through a cochlear implant.
... Numerous artefact reduction algorithms have been proposed in the literature, such as beamformers [1], polynomial fitting [2], blanking [3,4] and independent component analysis (ICA) [5][6][7], among others. Each of these approaches has its own limitations: ICA is subjective, time-consuming and computationally expensive; "blanking", which is a popular method for recording the auditory steady-state response in CI users, requires stimulation of a single electrode via a research interface, high sampling rates, low to intermediate stimulation rates (≤500 pulses per second or monopolar stimulation mode) and only reliably reduces the CI artefact at contralateral electrodes with respect to the implanted ear [8]; polynomial fitting requires a sampling rate which is sufficient to resolve the individual pulses, similar to the blanking method, and relies on flat envelopes of the acoustic signal to constrain the fit of the polynomial to the artefact and not to the neural response [2]. ...
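To make the polynomial-fitting idea mentioned above concrete, the toy sketch below fits and subtracts a low-order polynomial from an artifact-dominated, finely sampled segment; it assumes the slowly varying artifact pedestal is well captured by the polynomial while faster neural activity remains in the residual. This is only a schematic stand-in for the published method, and the segment, polynomial order, and data are invented.

```python
import numpy as np

def subtract_polynomial_artifact(segment, order=4):
    """Fit a low-order polynomial to an artifact-dominated segment and subtract it."""
    t = np.arange(len(segment), dtype=float)
    coeffs = np.polyfit(t, segment, order)
    artifact_estimate = np.polyval(coeffs, t)
    return segment - artifact_estimate, artifact_estimate

# Invented single-channel segment: drifting artifact pedestal plus noise
rng = np.random.default_rng(1)
segment = 5.0 + 0.01 * np.arange(2000) + rng.standard_normal(2000)
cleaned, artifact = subtract_polynomial_artifact(segment)
```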
... ICA has been successfully used in CI research to remove the CI artefact from acoustically evoked ALRs (e.g. Viola et al. 2011;Bakhos et al. 2012;Miller and Zhang 2014;Sandmann et al. 2015) and from electrically evoked cortical (40 Hz) ASSRs (e.g. Deprez et al. 2018). ...
Article
Full-text available
Objective: The aim of this study was to assess the feasibility of recording speech-ABRs from cochlear implant (CI) recipients, and to remove the artefact using a clinically applicable single-channel approach. Design: Speech-ABRs were recorded to a 40 ms [da] presented via loudspeaker using a two-channel electrode montage. Additionally, artefacts were recorded using an artificial head incorporating a MED-EL CI with stimulation parameters as similar as possible to those of three MED-EL participants. A single-channel artefact removal technique was applied to all responses. Study sample: A total of 12 adult CI recipients (6 Cochlear Nucleus and 6 MED-EL CIs). Results: Responses differed according to the CI type; artefact removal resulted in responses containing speech-ABR characteristics in two MED-EL CI participants; however, it was not possible to verify whether these were true responses or were modulated by artefacts, and artefact removal was successful from the artificial-head recordings. Conclusions: This is the first study that attempted to record speech-ABRs from CI recipients. Results suggest that there is a potential for application of a single-channel approach to artefact removal. However, a more robust and adaptive approach to artefact removal that includes a method to verify true responses is needed.
... During auditory stimulation, CIs introduce electrical artifacts into EEG recordings. In order to suppress the CI artifacts, ICA methods have been widely used [28][29][30]. Here, we used a similar algorithm, second-order blind identification (SOBI) implemented through the EEGLAB toolbox [31], for minimization of the CI artifact. ...
Article
Full-text available
Hearing impairment disrupts processes of selective attention that help listeners attend to one sound source over competing sounds in the environment. Hearing prostheses (hearing aids and cochlear implants, CIs) do not fully remedy these issues. In normal hearing, mechanisms of selective attention arise through the facilitation and suppression of neural activity that represents sound sources. However, it is unclear how hearing impairment affects these neural processes, which is key to understanding why listening difficulty remains. Here, severely impaired listeners treated with a CI, and age-matched normal-hearing controls, attended to one of two identical but spatially separated talkers while multichannel EEG was recorded. Whereas neural representations of attended and ignored speech were differentiated at early (~150 ms) cortical processing stages in controls, differentiation of talker representations only occurred later (~250 ms) in CI users. CI users, but not controls, also showed evidence for spatial suppression of the ignored talker through lateralized alpha (7–14 Hz) oscillations. However, CI users' perceptual performance was only predicted by early-stage talker differentiation. We conclude that multi-talker listening difficulty remains for impaired listeners due to deficits in early-stage separation of cortical speech representations, despite neural evidence that they use spatial information to guide selective attention.
... Independent components having spatial and temporal characteristics of EOG activity were identified and removed from the ICA matrix prior to averaging (Gilley et al. 2006;Jung et al. 2000;Miller and Zhang 2014a). The ERP epoch was 700 ms in total and consisted of a 100 ms pre-stimulus baseline and a 600 ms recording window. ...
Article
Full-text available
Background: Cortical auditory event-related potentials are a potentially useful clinical tool to objectively assess speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts and whether these differences result in distinct neural responses with and without hearing aid amplification remain unclear. Purpose: To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast. Research Design: A repeated-measures, within subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification. Study Sample: Ten adult listeners with normal hearing participated in the study. Data Collection and Analysis: Cortical auditory event-related potentials were elicited to an /ɑs/–/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64-electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance. Results: The P2' component of the acoustic change complex significantly differed from the syllable-final fricative contrast with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions. Conclusions: Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricative significantly differed with and without the hearing aid.
... CI artifact suppression. ICA methods have been widely used to suppress the CI artifact in EEG recordings [44][45][46], and here we used the second-order blind identification (SOBI) algorithm borrowed from EEGLAB functions [47] for CI artifact suppression. Our group has successfully used this procedure to attenuate artifacts in prior studies on CI users [48]. ...
Article
Full-text available
Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power in two cortical regions has been associated with effortful listening-left inferior frontal gyrus (IFG), and parietal cortex-but these relationships have not been examined in the same listeners. Further, there are few studies available investigating neural correlates of effort in the individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm, and confirm a relationship between alpha power and self-reported effort ratings in parietal regions, but not left IFG. The parietal relationship was not linear but quadratic, with alpha power comparatively lower when effort ratings were at the top and bottom of the effort scale, and higher when effort ratings were in the middle of the scale. Results are discussed in terms of cognitive systems that are engaged in difficult listening situations, and the implication for clinical translation.
... For each ERP time-course, we estimated the noise level as the averaged standard deviation in a [−100, 0] ms window, and then averaged the resulting noise values across all channels. In line with previous auditory ERP studies, the signal level was quantified as the mean absolute intensity in a [80, 120] ms window for the ERP time-course at channel Cz (Oray et al. 2002, Miller and Zhang 2014). We also assessed the topographies of the auditory N1 response, for latencies ranging between 80 and 120 ms. ...
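The signal and noise quantification described in this excerpt reduces to two windowed statistics on the averaged ERP. A small NumPy sketch is given below; the array shapes and the Cz channel index are assumptions for illustration.

```python
import numpy as np

def erp_signal_and_noise(erp, times, cz_index):
    """Signal and noise estimates as described in the excerpt above.

    erp   : (n_channels, n_times) averaged ERP time-courses
    times : (n_times,) time axis in seconds, 0 = stimulus onset
    """
    pre_stim = (times >= -0.100) & (times <= 0.0)
    n1_window = (times >= 0.080) & (times <= 0.120)
    # Noise: standard deviation in the pre-stimulus window, averaged over channels
    noise = erp[:, pre_stim].std(axis=1).mean()
    # Signal: mean absolute intensity in the 80-120 ms window at channel Cz
    signal = np.abs(erp[cz_index, n1_window]).mean()
    return signal, noise

# Toy usage with random data; channel index 47 is only a placeholder for Cz
rng = np.random.default_rng(2)
times = np.linspace(-0.1, 0.5, 601)
erp = rng.standard_normal((64, times.size))
signal, noise = erp_signal_and_noise(erp, times, cz_index=47)
```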
Article
Full-text available
Objective. Electroencephalography (EEG) is a widely used technique to address research questions about brain functioning, from controlled laboratory conditions to naturalistic environments. However, EEG data are affected by biological (e.g., ocular, myogenic) and non-biological (e.g., movement-related) artifacts, which, depending on their extent, may limit the interpretability of the study results. Blind source separation (BSS) approaches have been demonstrated to be particularly promising for attenuation of artifacts in high-density EEG (hdEEG) data. Previous EEG artifact removal studies suggested that it may not be optimal to use the same BSS method for different kinds of artifacts. Approach. In this study, we developed a novel multi-step BSS approach to optimize the attenuation of ocular, movement-related and myogenic artifacts from hdEEG data. For validation purposes, we used hdEEG data collected in a group of healthy participants in standing, slow-walking and fast-walking conditions. During part of the experiment, a series of tone bursts were used to evoke auditory responses. We quantified event-related potentials (ERPs) using hdEEG signals collected during auditory stimulation, as well as event-related desynchronization (ERD) by contrasting hdEEG signals collected in walking and standing conditions, without auditory stimulation. We compared the results obtained in terms of auditory ERP and motor-related ERD using the proposed multi-step BSS approach with respect to two classically used single-step BSS approaches. Main results. The use of our approach yielded the lowest residual noise in the hdEEG data, and permitted the retrieval of stronger and more reliable modulations of neural activity than alternative solutions. Overall, our study confirmed that the performance of BSS-based artifact removal can be improved by using specific BSS methods and parameters for different kinds of artifacts. Significance. Our technological solution supports a wider use of hdEEG-based source imaging in movement and rehabilitation studies, and contributes to the further development of mobile brain/body imaging applications.
... In the presence of auditory stimuli, the electrical stimulation and radio-frequency signals of CIs impart electrical stimulation artifacts into the EEG recording (Wagner et al., 2018). While ICA has been previously used to identify CI artifacts in EEG recordings involving auditory evoked potentials (Gilley et al., 2006; Miller and Zhang, 2014), the nature of CI artifacts makes their reduction more challenging, especially for continuous tasks. In general, ICA separates statistically independent components from the mixed signal. ...
Preprint
Full-text available
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences in clinical and "real-world" listening environments and stimuli. Speech sounds in the real world are often accompanied by visual cues and background environmental noise, and are generally in the context of a connected conversation. The aims of this study were to determine if brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density EEG while CI users listened/watched a naturalistic stimulus (i.e., the television show, "The Office"). We used continuous EEG to quantify "speech neural tracking" (i.e., TRFs, temporal response functions) to the television show audio track and additionally 8-12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB were presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task included an additional condition of audio-only (no video). After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they could understand. Fifteen CI users reported progressively higher degrees of listening demand and less words and conversation with increasing background noise. Listening demand and conversation understanding in the audio-only condition was comparable to that of the highest noise condition (+5 dB). The addition of the background noise reduced the degree of speech neural tracking. Mixed effect modeling showed that listening demand and conversation understanding were correlated to cortical speech tracking, such that high demand and low conversation understanding were associated with lower-amplitude TRFs. In the high noise condition, greater listening demand was negatively correlated to parietal alpha power, such that higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha and clinical speech perception scores. These results are similar to previous findings showing little relationship between speech perception and quality of life in CI users. However, the physiological responses to complex natural speech may anticipate aspects of quality-of-life measures such as self-perceived listening demand.
... The continuous EEG data were segmented into 2-s epochs. When removing artifacts, we treated subjects in the NH and CI groups differently; this was because for children with CIs, the EEG could be contaminated by electrical device-related artifacts (69). First, we visually inspected the epochs and removed those containing artifacts such as head or muscle movements, electrode cable movements, and rare jaw clenching. ...
Article
Full-text available
There are individual differences in rehabilitation after cochlear implantation that can be explained by brain plasticity. However, from the perspective of brain networks, the effect of implantation age on brain plasticity is unclear. The present study investigated electroencephalography functional networks in the resting state, including eyes-closed and eyes-open conditions, in 31 children with early cochlear implantation, 24 children with late cochlear implantation, and 29 children with normal hearing. Resting-state functional connectivity was measured with phase lag index, and we investigated the connectivity between the sensory regions for each frequency band. Network topology was examined using minimum spanning tree to obtain the network backbone characteristics. The results showed stronger connectivity between auditory and visual regions but reduced global network efficiency in children with late cochlear implantation in the theta and alpha bands. Significant correlations were observed between functional backbone characteristics and speech perception scores in children with cochlear implantation. Collectively, these results reveal an important effect of implantation age on the extent of brain plasticity from a network perspective and indicate that characteristics of the brain network can reflect the extent of rehabilitation of children with cochlear implantation.
... The CI stimulation artifacts could cause the erroneous detection of neural responses and could distort response properties. The removal of this unintentionally recorded stimulus artifact from the mixture has been proven to be a difficult task (Bahmer et al., 2008; Brown et al., 1994, 2000; Hay-McCutcheon et al., 2002; Miller & Zhang, 2014; Undurraga et al., 2013). ...
Article
Full-text available
In patients with bilateral cochlear implants (CIs), pairing matched interaural electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, and spatial release from masking. Because clinical procedures typically do not include patient-specific interaural electrode pairing, it remains the case that each electrode is allocated to a generic frequency range, based simply on the electrode number. Two psychoacoustic techniques for determining interaurally paired electrodes have been demonstrated in several studies: interaural pitch comparison and interaural time difference (ITD) sensitivity. However, these two methods are rarely, if ever, compared directly. A third, more objective method is to assess the amplitude of the binaural interaction component (BIC) derived from electrically evoked auditory brainstem responses for different electrode pairings; a method has been demonstrated to be a potential candidate for bilateral CI users. Here, we tested all three measures in the same eight CI users. We found good correspondence between the electrode pair producing the largest BIC and the electrode pair producing the maximum ITD sensitivity. The correspondence between the pairs producing the largest BIC and the pitch-matched electrode pairs was considerably weaker, supporting the previously proposed hypothesis that whilst place pitch might adapt over time to accommodate mismatched inputs, sensitivity to ITDs does not adapt to the same degree.
Article
Objective: Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach: ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results: For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance: We hypothesize that ICA is capable of separating CI artifacts and EASSR in case the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
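The abstract above states that artifactual ICs were identified automatically from their spectra, without detailing the criterion. One plausible (assumed) criterion is to flag components whose power spectra are dominated by stimulation-related frequencies, as sketched below with SciPy; the frequency list, bandwidth, and threshold are illustrative guesses rather than the study's actual parameters.

```python
import numpy as np
from scipy.signal import welch

def flag_artifact_ics(ic_activations, sfreq, artifact_freqs, bw=1.0, ratio_thresh=0.5):
    """Flag ICs whose spectra are dominated by stimulation-related frequencies.

    ic_activations : (n_components, n_times) IC time courses
    artifact_freqs : list of frequencies (Hz) at which CI artifact energy is expected
    """
    flagged = []
    for idx, activation in enumerate(ic_activations):
        freqs, psd = welch(activation, fs=sfreq, nperseg=int(4 * sfreq))
        in_band = np.zeros(freqs.shape, dtype=bool)
        for f0 in artifact_freqs:
            in_band |= np.abs(freqs - f0) <= bw
        # Component is flagged when most of its power lies in the artifact bands
        if psd[in_band].sum() / psd.sum() > ratio_thresh:
            flagged.append(idx)
    return flagged
```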
Article
Objective: a) To examine the effects of sensorineural hearing loss on the discriminability of linguistic and non-linguistic stimuli at the cortical level, and b) to examine whether the cortical responses differ based on the chronological age at intervention, the degree of hearing loss, or the acoustic stimulation mode in children with severe and profound hearing loss. Methods: Mismatch negativity (MMN) responses were collected from 43 children with severe and profound bilateral sensorineural hearing loss, and 20 children with normal hearing (age: 3-6 years). In the non-verbal stimulation condition, pure tones with frequencies of 1 kHz and 1.1 kHz were used as the standard and the deviant, respectively. In the verbal stimulation condition, the Mandarin Chinese tokens /ba2/ and /ba4/ were used as the standard and the deviant, respectively. Latency and amplitude of the MMN responses were collected and analyzed. Results: Overall, children with hearing loss showed longer latencies and lower amplitudes of the MMN responses to both non-verbal and verbal stimulations. The latency of the verbal /ba2/-/ba4/ pair was longer than that of the non-verbal 1 kHz-1.1 kHz pair in both groups of children. Conclusions: Children with hearing loss, especially those who received intervention after 2 years of age, showed substantial weakness in the neural responses to lexical tones and pure tones. Thus, the chronological age when the children receive hearing intervention may have an impact on the effectiveness of discriminating between verbal and non-verbal signals.
Article
Cochlear implants (CI) are neural prostheses that can restore hearing in individuals with severe to profound hearing loss. Although CIs significantly improve quality of life, clinical outcomes are still highly variable. An important part of this variability is explained by the brain reorganization following cochlear implantation. Therefore, clinicians and researchers are seeking objective measurements to investigate post-implantation brain plasticity. Electroencephalography (EEG) is a promising technique because it is objective, non-invasive, and implant-compatible, but is nonetheless susceptible to massive artifacts generated by the prosthesis's electrical activity. CI artifacts can blur and distort brain responses; thus, it is crucial to develop reliable techniques to remove them from EEG recordings. Despite numerous artifact removal techniques used in previous studies, there is a paucity of documentation and consensus on the optimal EEG procedures to reduce these artifacts. Herein, and through a comprehensive review process, we provide a guideline for designing an EEG-CI experiment minimizing the effect of the artifact. We provide some technical guidance for recording an accurate neural response from CI users and discuss the current challenges in detecting and removing CI-induced artifacts from a recorded signal. The aim of this paper is also to provide recommendations to better appraise and report EEG-CI findings.
Article
Full-text available
A better understanding of melodic pitch perception in cochlear implants (CIs) may guide signal processing and/or rehabilitation techniques to improve music perception and appreciation in CI patients. In this study, the mismatch negativity (MMN) in response to infrequent changes in 5-tone pitch contours was obtained in CI users and normal-hearing (NH) listeners. Melodic contour identification (MCI) was also measured. Results showed that MCI performance was poorer in CI than in NH subjects; the MMNs were missing in all CI subjects for the 1-semitone contours. The MMNs with the 5-semitone contours were observed in a smaller proportion of CI than NH subjects. Results suggest that encoding of pitch contour changes in CI users appears to be degraded, most likely due to the limited pitch cues provided by the CI and deafness-related compromise of brain substrates. © 2013 S. Karger AG, Basel.
Article
Full-text available
We describe a set of complementary EEG data collection and processing tools recently developed at the Swartz Center for Computational Neuroscience (SCCN) that connect to and extend the EEGLAB software environment, a freely available and readily extensible processing environment running under Matlab. The new tools include (1) a new and flexible EEGLAB STUDY design facility for framing and performing statistical analyses on data from multiple subjects; (2) a neuroelectromagnetic forward head modeling toolbox (NFT) for building realistic electrical head models from available data; (3) a source information flow toolbox (SIFT) for modeling ongoing or event-related effective connectivity between cortical areas; (4) a BCILAB toolbox for building online brain-computer interface (BCI) models from available data, and (5) an experimental real-time interactive control and analysis (ERICA) environment for real-time production and coordination of interactive, multimodal experiments.
Article
Full-text available
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of formant exaggeration. ERP waveform analysis showed significantly enhanced N250 for formant exaggeration, which was more prominent in the right hemisphere than the left. Time-frequency analysis indicated increased neural synchronization for processing formant-exaggerated speech in the delta band at frontal-central-parietal electrode sites as well as in the theta band at frontal-central sites. Minimum norm estimates further revealed a bilateral temporal-parietal-frontal neural network in the infant brain sensitive to formant exaggeration. Collectively, these results provide the first evidence that formant expansion in infant-directed speech enhances neural activities for phonetic encoding and language learning.
Article
Full-text available
When cortical auditory evoked potentials (CAEPs) are recorded in individuals with a cochlear implant (CI), electrical artifact can make the CAEP difficult or impossible to measure. Since increasing the interstimulus interval (ISI) increases the amplitude of physiological responses without changing the artifact, subtracting CAEPs recorded with a short ISI from those recorded with a longer ISI should show the physiological response without any artifact. In the first experiment, N1-P2 responses were recorded using a speech syllable and tone, paired with ISIs that changed randomly between 0.5 and 4 s. In the second experiment, the same stimuli, at ISIs of either 500 or 3000 ms, were presented in blocks that were homogeneous or random with respect to the ISI or stimulus. In the third experiment, N1-P2 responses were recorded using pulse trains with 500 and 3000 ms ISIs in 4 CI listeners. The results demonstrated: (1) N1-P2 response amplitudes generally increased with increasing ISI. (2) Difference waveforms were larger for the homogeneous and random-stimulus blocks than for the random-ISI block. (3) The subtraction technique almost completely eliminated the electrical artifact in individuals with cochlear implants. Therefore, the subtraction technique is a feasible method of removing from the N1-P2 response the electrical artifact generated by the cochlear implant.
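The logic of the subtraction technique can be shown in a few lines: if the CI artifact is the same at both ISIs while the neural N1-P2 grows with the longer ISI, the long-ISI minus short-ISI difference cancels the artifact and retains a scaled neural response. The toy waveforms below are synthetic and only illustrate that arithmetic, not the recorded data.

```python
import numpy as np

times = np.linspace(-0.1, 0.5, 601)                          # seconds
artifact = 20.0 * np.exp(-((times - 0.01) / 0.005) ** 2)     # identical at both ISIs
n1p2 = -np.exp(-((times - 0.10) / 0.03) ** 2) + np.exp(-((times - 0.20) / 0.04) ** 2)

caep_short_isi = artifact + 2.0 * n1p2    # smaller neural response at 0.5 s ISI
caep_long_isi = artifact + 5.0 * n1p2     # larger neural response at 3 s ISI

# The artifact cancels; a scaled N1-P2 remains in the difference waveform
difference = caep_long_isi - caep_short_isi
```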
Article
Full-text available
The acoustic change complex (ACC) is a scalp-recorded negative-positive voltage swing elicited by a change during an otherwise steady-state sound. The ACC was obtained from eight adults in response to changes of amplitude and/or spectral envelope at the temporal center of a three-formant synthetic vowel lasting 800 ms. In the absence of spectral change, the group mean waveforms showed a clear ACC to amplitude increments of 2 dB or more and decrements of 3 dB or more. In the presence of a change of second formant frequency (from perceived /u/ to perceived /i/), amplitude increments increased the magnitude of the ACC but amplitude decrements had little or no effect. The fact that the just detectable amplitude change is close to the psychoacoustic limits of the auditory system augurs well for the clinical application of the ACC. The failure to find a condition under which the spectrally elicited ACC is diminished by a small change of amplitude supports the conclusion that the observed ACC to a change of spectral envelope reflects some aspect of cortical frequency coding. Taken together, these findings support the potential value of the ACC as an objective index of auditory discrimination capacity.
Article
Full-text available
To determine whether the N1-P2 complex reflects training-induced changes in neural activity associated with improved voice-onset-time (VOT) perception. Auditory cortical evoked potentials N1 and P2 were obtained from 10 normal-hearing young adults in response to two synthetic speech variants of the syllable /ba/. Using a repeated measures design, subjects were tested before and after training both behaviorally and neurophysiologically to determine whether there were training-related changes. In between pre- and post-testing sessions, subjects were trained to distinguish the -20 and -10 msec VOT /ba/ syllables as being different from each other. Two stimulus presentation rates were used during electrophysiologic testing (390 msec and 910 msec interstimulus interval). Before training, subjects perceived both the -20 msec and -10 msec VOT stimuli as /ba/. Through training, subjects learned to identify the -20 msec VOT stimulus as "mba" and the -10 msec VOT stimulus as "ba." As subjects learned to correctly identify the difference between the -20 msec and -10 msec VOT syllables, an increase in N1-P2 peak-to-peak amplitude was observed. The effects of training were most obvious at the slower stimulus presentation rate. As perception improved, N1-P2 amplitude increased. These changes in waveform morphology are thought to reflect increases in neural synchrony as well as strengthened neural connections associated with improved speech perception. These findings suggest that the N1-P2 complex may have clinical applications as an objective physiologic correlate of speech-sound representation associated with speech-sound training.
Article
Full-text available
The goal of this study was to determine whether there is a sensitive period during early development when a cochlear implantation can occur into a minimally degenerate and/or highly plastic central auditory system. Our measure of central auditory deprivation was latency of the P1 auditory evoked potential, whose generators include auditory thalamocortical areas. Auditory evoked potentials were recorded in 18 congenitally deaf children who were fitted with cochlear implants by 3.5 years of age. The P1 latencies of the children with implants were compared with the P1 latencies of their age-matched peers with normal hearing. There was no significant difference between the P1 latencies of the children with implants and the children with normal hearing. The present results suggest that early implantation occurs in a central auditory system that is minimally degenerate and/or highly plastic. Studies are ongoing to assess the consequences to the developing central auditory system of initiating electrical stimulation at later ages.
Article
Full-text available
We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), independent component analysis (ICA) and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, user tutorial and extensive documentation.
Article
Full-text available
This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model.
Article
Full-text available
Human representational cortex may fundamentally alter its organization and (re)gain the capacity for auditory processing even when it is deprived of its input for more than two decades. Stimulus-evoked brain activity was recorded in post-lingual deaf patients after implantation of a cochlear prosthesis, which partly restored their hearing. During a 2 year follow-up study this activity revealed almost normal component configuration and was localized in the auditory cortex, demonstrating adequacy of the cochlear implant stimulation. Evoked brain activity increased over several months after the cochlear implant was turned on. This is taken as a measure of the temporal dynamics of plasticity of the human auditory system after implantation of cochlear prosthesis.
Article
Recent evidence suggests that late auditory evoked potentials (LAEP) provide a useful objective metric of performance in cochlear implant (CI) subjects. However, the CI produces a large electrical artifact that contaminates LAEP recordings and confounds their interpretation. Independent component analysis (ICA) has been used in combination with multi-channel recordings to effectively remove the artifact. The applicability of the ICA approach is limited when only single channel data are needed or available, as is often the case in both clinical and research settings. Here we developed a single-channel, high sample rate (125 kHz), and high bandwidth (0 - 100 kHz) acquisition system to reduce the CI stimulation artifact. We identified two different artifacts in the recording: 1) a high frequency artifact reflecting the stimulation pulse rate, and 2) a direct current (DC, or pedestal) artifact that showed a non-linear time varying relationship to pulse amplitude. This relationship was well described by a bivariate polynomial. The high frequency artifact was completely attenuated by a 35 Hz low-pass filter for all subjects (n=22). The DC artifact could be caused by an impedance mismatch. For 27% of subjects tested, no DC artifact was observed when electrode impedances were balanced to within 1 kΩ. For the remaining 73% of subjects, the pulse amplitude was used to estimate and then attenuate the DC artifact. Where measurements of pulse amplitude were not available (as with standard low sample rate systems), the DC artifact could be estimated from the stimulus envelope. The present artifact removal approach allows accurate measurement of LAEPs from CI subjects from single channel recordings, increasing their feasibility and utility as an accessible objective measure of CI function.
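As a rough illustration of the two artifact components described above, the sketch below low-pass filters the recording at 35 Hz to attenuate the pulse-rate artifact and fits a bivariate polynomial in time and pulse amplitude to estimate the DC pedestal. The filter order, polynomial order, and model terms are assumptions; the published system's exact implementation is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_35hz(signal, sfreq):
    """Attenuate the high-frequency (pulse-rate) CI artifact with a 35 Hz low-pass."""
    b, a = butter(4, 35.0 / (sfreq / 2.0), btype="low")
    return filtfilt(b, a, signal)

def estimate_dc_pedestal(t, pulse_amp, recording, order=2):
    """Least-squares fit of a bivariate polynomial in time and pulse amplitude.

    t, pulse_amp, recording : 1-D arrays of equal length (per-sample values).
    Returns the estimated pedestal artifact, to be subtracted from the recording.
    """
    columns = [t**i * pulse_amp**j for i in range(order + 1) for j in range(order + 1)]
    design = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(design, recording, rcond=None)
    return design @ coeffs
```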
Article
Electrical artifacts caused by the cochlear implant (CI) contaminate electroencephalographic (EEG) recordings from implanted individuals and corrupt auditory evoked potentials (AEPs). Independent component analysis (ICA) is efficient in attenuating the electrical CI artifact, and AEPs can be successfully reconstructed. However, the manual selection of CI-artifact-related independent components (ICs) obtained with ICA is unsatisfactory, since it relies on expert choices and is time-consuming. We developed a new procedure to evaluate temporal and topographical properties of ICs and semi-automatically select those components representing the electrical CI artifact. The CI Artifact Correction (CIAC) algorithm was tested on EEG data from two different studies. The first consists of published datasets from 18 CI users listening to environmental sounds. Compared with the manual IC selection performed by an expert, the sensitivity of CIAC was 91.7% and the specificity 92.3%. After CIAC-based attenuation of CI artifacts, a high correlation between age and N1-P2 peak-to-peak amplitude was observed in the AEPs, replicating previously reported findings and further confirming the algorithm's validity. In the second study, AEPs in response to pure-tone and white-noise stimuli were evaluated from 12 CI users who had also participated in the first study. CI artifacts were attenuated based on the IC selection performed semi-automatically by CIAC and manually by one expert. Again, a correlation between N1 amplitude and age was found. Moreover, a high test-retest reliability for AEP N1 amplitudes and latencies suggested that CIAC-based attenuation reliably preserves plausible individual response characteristics. We conclude that CIAC enables the objective and efficient attenuation of the CI artifact in EEG recordings, as it provided a reasonable reconstruction of individual AEPs. The systematic pattern of individual differences in N1 amplitudes and latencies observed with different stimuli at different sessions strongly suggests that CIAC can overcome the electrical artifact problem. Thus, CIAC facilitates the use of cortical AEPs as an objective measurement of auditory rehabilitation.
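For readers unfamiliar with how temporal properties of independent components can be screened, the fragment below illustrates one generic heuristic: flagging components whose trial-averaged activity during the stimulus interval is much larger than during the pre-stimulus baseline. This is an illustrative sketch only, not the CIAC algorithm; the windows and threshold are arbitrary, and an epoched EEGLAB dataset EEG with an ICA decomposition is assumed.

% Illustrative heuristic, not CIAC: flag components dominated by
% stimulus-locked activity. Windows and threshold are placeholders.
baseWin = EEG.times < 0;                         % pre-stimulus samples
stimWin = EEG.times >= 0 & EEG.times <= 500;     % assumed 500-ms stimulus
nComps  = size(EEG.icaweights, 1);
ratio   = zeros(1, nComps);
for c = 1:nComps
    act      = squeeze(eeg_getdatact(EEG, 'component', c));  % frames x trials
    avg      = mean(act, 2);                                  % trial-averaged activation
    ratio(c) = sqrt(mean(avg(stimWin).^2)) / sqrt(mean(avg(baseWin).^2));
end
suspects = find(ratio > 5);   % candidate CI-artifact components (arbitrary cutoff)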
Article
Auditory evoked potentials (AEPs) provide an objective measure of auditory cortical function, but AEPs from cochlear implant (CI) users are contaminated by an electrical artifact. Here, we investigated the effects of electrical artifact attenuation on AEP quality. The ability of independent component analysis (ICA) to attenuate the CI artifact while preserving the AEPs was evaluated. AEPs recovered from CI users were systematically correlated with age, demonstrating that individual differences were well preserved. CI users with high-quality AEPs were characterized by a significantly shorter duration of deafness. Finally, a simulation study revealed very high spatial correlations between original and recovered normal-hearing AEPs (r > .95) that had previously been contaminated with CI artifacts. The results confirm that after ICA, good-quality AEPs can be recovered, facilitating the objective, noninvasive study of auditory cortex function in CI users.
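The spatial-correlation metric used above is simple to reproduce. A minimal sketch, assuming channels x time ERP matrices erp_original and erp_recovered (hypothetical variable names) and an EEGLAB-style time vector, is:

% Minimal sketch with assumed variable names.
latency_ms = 100;                                % e.g., near the N1 peak
[~, idx]   = min(abs(EEG.times - latency_ms));   % nearest sample to that latency
R = corrcoef(erp_original(:, idx), erp_recovered(:, idx));
spatial_r = R(1, 2);                             % Pearson correlation across channels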
Article
A better understanding of the neural correlates of the large variability in cochlear implant (CI) patients' speech performance may allow us to find solutions to further improve CI benefits. The present study examined the mismatch negativity (MMN) and the adaptation of the late auditory evoked potential (LAEP) in 10 CI users. The speech syllable /da/ and a 1-kHz tone burst were used to examine LAEP adaptation. The amount of LAEP adaptation was calculated by comparing the averaged N1-P2 amplitude for the LAEPs evoked by the last three stimuli with the amplitude evoked by the first stimulus. For the MMN recordings, the standard stimulus (1-kHz tone) and the deviant stimulus (2-kHz tone) were presented in an oddball condition. Additionally, the deviants alone were presented in a control condition. The MMN was derived by subtracting the response to the deviants in the control condition from the response to the deviants in the oddball condition. Results showed that good CI performers displayed a more prominent LAEP adaptation than moderate-to-poor performers. Speech performance was significantly correlated with the amount of LAEP adaptation for the 1-kHz tone bursts. Good performers displayed large MMNs, and moderate-to-poor performers had small or absent MMNs. The abnormal electrophysiological findings in moderate-to-poor performers suggest that long-term deafness may cause damage not only at the auditory cortical level, but also at the cognitive level.
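The two derived measures in this study can be summarized in a few lines. The sketch below uses assumed variable names and should be read as a paraphrase of the description above rather than the authors' exact formula.

% Assumed variables: channels x time ERPs and a vector of N1-P2 amplitudes.
% MMN: deviant response in the control condition subtracted from the
% deviant response in the oddball condition.
mmn = erp_deviant_oddball - erp_deviant_control;

% LAEP adaptation: averaged N1-P2 amplitude for the last three stimuli in a
% train compared with the amplitude for the first stimulus.
n1p2_first = n1p2_amp(1);
n1p2_last3 = mean(n1p2_amp(end-2:end));
adaptation = n1p2_last3 / n1p2_first;   % one plausible index; the exact formula may differ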
Article
This study employed behavioral and electrophysiological measures to examine selective listening of concurrent auditory stimuli. Stimuli consisted of four compound sounds, each created by mixing a pure tone with filtered noise bands at a signal-to-noise ratio of +15 dB. The pure tones and filtered noise bands each contained two levels of pitch. Two separate conditions were created in which the background stimuli either varied randomly or were held constant. In separate blocks, participants were asked to judge the pitch of the tones or the pitch of the filtered noise in the compound stimuli. Behavioral data consistently showed lower sensitivity and longer response times for classification of the filtered noise than for classification of the tones. However, differential effects were observed in the peak components of the auditory event-related potentials (ERPs). Relative to tone classification, the P1 and N1 amplitudes were enhanced during the more difficult noise classification task in both test conditions, but the peak latencies were shorter for P1 and longer for N1 during noise classification. Moreover, a significant interaction between condition and task was seen for the P2. The results suggest that the essential ERP components for the same compound auditory stimuli are modulated by listeners' focus on specific aspects of information in the stimuli.
Article
The aim of the present experiment was to assess the consequences of cochlear implantation at different ages on the development of the human central auditory system. Our measure of the maturity of central auditory pathways was the latency of the P1 cortical auditory evoked potential. Because P1 latencies vary as a function of chronological age, they can be used to infer the maturational status of auditory pathways in congenitally deafened children who regain hearing after being fit with a cochlear implant. We examined the development of P1 response latencies in 104 congenitally deaf children who had been fit with cochlear implants at ages ranging from 1.3 yr to 17.5 yr and in three congenitally deaf adults. The independent variable was the duration of deafness before cochlear implantation. The dependent variable was the latency of the P1 cortical auditory evoked potential. A comparison of P1 latencies in implanted children with those of age-matched normal-hearing peers revealed that implanted children with the longest period of auditory deprivation before implantation (7 or more yr) had abnormal cortical response latencies to speech. Implanted children with the shortest period of auditory deprivation (approximately 3.5 yr or less) evidenced age-appropriate latency responses within 6 mo after the onset of electrical stimulation. Our data suggest that in the absence of normal stimulation there is a sensitive period of about 3.5 yr during which the human central auditory system remains maximally plastic. Plasticity remains in some, but not all, children until approximately age 7. After age 7, plasticity is greatly reduced. These data may be relevant to the issue of when best to place a cochlear implant in a congenitally deaf child.
Article
As the need for objective measures with cochlear implant users increases, it is critical to understand how electrical potentials behave when stimulus parameters are systematically varied. The purpose of this study was to record and evaluate the effects of implanted electrode site and stimulus current level on latency, amplitude, and threshold measures of electrically evoked auditory potentials representing brainstem and cortical levels of the auditory system. The electrical auditory brainstem response (EABR), electrical auditory middle latency response (EAMLR), and electrical late auditory response (ELAR) were recorded from the same experimental subjects, 11 adult Clarion cochlear implant users. Waves II, III, and V of the EABR, the Na-Pa complex of the EAMLR, and the N1-P2 complex of the ELAR were investigated relative to electrode site (along the intra-cochlear electrode array) and stimulus current level. Evoked potential measures were examined for statistical significance using analysis of variance (ANOVA) for repeated measures. For the EABR, Wave V latency was significantly longer for the basal electrode (7) than for the mid (4) and apical (1) electrodes. For the EAMLR and ELAR, there were no significant differences in latency by electrode site. For all subjects and each of the evoked potentials, the apical electrodes tended to have the largest amplitudes and the basal electrodes the smallest, although amplitude differences did not reach statistical significance. In general, decreases in stimulus current level resulted in statistically significant decreases in the amplitudes of Wave V, Na-Pa, and N1-P2. The evoked potential thresholds for Wave V, Na-Pa, and N1-P2 were significantly higher for the basal Electrode 7 than for Electrodes 4 and 1. Electrophysiologic responses of Waves II, III, and V of the EABR, Na-Pa of the EAMLR, and N1-P2 of the ELAR were characterized as functions of current level and electrode site. Data from this study may serve as a normative reference for expected latency, amplitude, and threshold values for the recording of electrically evoked auditory brainstem and cortical potentials. Responses recorded from cochlear implant users show many similar patterns, yet important distinctions, compared with auditory potentials elicited with acoustic signals.
Article
This article provides a selective review of perspectives on the clinical research and application of the mismatch negativity (MMN), a component of the auditory event-related potential generated by the brain's automatic response to any discriminable change in auditory stimulation. The MMN (and its magnetic equivalent, the MMNm) currently provides the only objective measure of auditory discrimination and sensory memory. It can be registered in the absence of attention and with no task requirements, which makes it particularly suitable for studying different clinical populations and infants.
Article
To compare two methods of minimizing cochlear implant artifact in cortical auditory evoked potential (CAEP) recordings, two experiments were conducted. In the first, we assessed the use of independent component analysis (ICA) as a pre-processing filter. In the second, we explored the use of an optimized differential reference (ODR) for minimizing artifacts. Both ICA and the ODR can minimize the artifact and allow measurement of CAEP responses. When using a large number of recording electrodes, ICA can be used to minimize the implant artifact. When using a single-electrode montage, an optimized differential reference is adequate to minimize the artifact. The use of an optimized differential reference could allow cortical evoked potentials to be used in routine clinical assessment of auditory pathway development in children and adults fit with cochlear implants.
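One way to picture the differential-reference idea, independent of the published ODR procedure, is to search a set of candidate reference electrodes for the bipolar pair that leaves the least residual artifact during the stimulus interval. The sketch below is a generic illustration with arbitrary channel indices, windows, and selection criterion, assuming an epoched EEGLAB dataset EEG.

% Generic illustration only; not the published ODR method.
rec        = 5;                                   % assumed active recording channel
candidates = setdiff(1:EEG.nbchan, rec);          % possible reference channels
stimWin    = EEG.times >= 0 & EEG.times <= 300;   % assumed artifact-dominated window
erp        = mean(EEG.data, 3);                   % channels x time average

residual = zeros(1, numel(candidates));
for k = 1:numel(candidates)
    diffwave    = erp(rec, :) - erp(candidates(k), :);   % bipolar derivation
    residual(k) = sqrt(mean(diffwave(stimWin).^2));      % RMS in the artifact window
end
[~, best] = min(residual);
refChan   = candidates(best);   % reference minimizing residual artifact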
Article
Little is known about how the auditory cortex adapts to artificial input as provided by a cochlear implant (CI). We report the case of a 71-year-old profoundly deaf man, who has successfully used a unilateral CI for 4 years. Independent component analysis (ICA) of 61-channel EEG recordings could separate CI-related artifacts from auditory-evoked potentials (AEPs), even though it was the perfectly time-locked CI stimulation that caused the AEPs. AEP dipole source localization revealed contralaterally larger amplitudes in the P1-N1 range, similar to normal hearing individuals. In contrast to normal hearing individuals, the man with the CI showed a 20-ms shorter N1 latency ipsilaterally. We conclude that ICA allows the detailed study of AEPs in CI users.
Article
Speech-evoked auditory event-related potentials (ERPs) provide insight into the neural mechanisms underlying speech processing. For this reason, ERPs are of great value to hearing scientists and audiologists. This article will provide an overview of ERPs frequently used to examine the processing of speech and other sound stimuli. These ERPs include the P1-N1-P2 complex, acoustic change complex, mismatch negativity, and P3 responses. In addition, we focus on the application of these speech-evoked potentials for the assessment of (1) the effects of hearing loss on the neural encoding of speech allowing for behavioral detection and discrimination; (2) improvements in the neural processing of speech with amplification (hearing aids, cochlear implants); and (3) the impact of auditory training on the neural processing of speech. Studies in these three areas are reviewed and implications for audiologists are discussed.
R. Näätänen, Mismatch negativity: clinical research and possible applications, Int. J. Psychophysiol. 48 (2003) 179–188.
A. Delorme, T. Mullen, C. Kothe, Z. Akalin Acar, N. Bigdely-Shamlo, A. Vankov, S. Makeig, EEGLAB, SIFT, NFT, BCILAB, and ERICA: new tools for advanced EEG processing, Comput. Intell. Neurosci. 2011 (2011) 130714.
F. Zhang, T. Hammer, H.L. Banks, C. Benson, J. Xiang, Q.J. Fu, Mismatch negativity and adaptation measures of the late auditory evoked potential in cochlear implant users, Hear. Res. 275 (2011) 17–29.