Article · Literature Review

The emergence of machine learning in auditory neural impairment: A systematic review


Abstract

Hearing loss is a common neurodegenerative disease that can begin at any stage of life. Impairment of the auditory neural pathway may pose challenges in processing incoming auditory stimuli, challenges that can be measured using electroencephalography (EEG). The electrophysiological responses obtained from EEG auditory evoked potentials (AEPs) require highly trained professionals for analysis and interpretation. Reliable automated methods based on machine learning techniques would assist the auditory assessment process and support informed treatment and practice. It is therefore important to develop models that are more efficient and precise by accounting for the characteristics of brain signals. This study provides a comprehensive review of several state-of-the-art machine learning techniques that adopt EEG evoked responses for auditory assessment within the last 13 years. Out of 161 initially screened articles, 11 were retained for synthesis. The review found that the Support Vector Machine (SVM) classifier outperformed other methods, with accuracy above 80%, and was recognized as the best-suited model within the field of auditory research. This paper discusses the iterative properties of the proposed algorithms and feasible future directions in the rehabilitation of the hearing impaired.
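As a concrete illustration of the modeling paradigm this review converged on, below is a minimal sketch of an SVM pipeline for AEP-derived feature vectors. The features, labels, and data are synthetic placeholders, not drawn from any reviewed study.

```python
# Minimal sketch: SVM classification of auditory evoked potential (AEP)
# feature vectors, the model family this review found best suited (>80%
# accuracy in the retained studies). All data below are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))    # 60 subjects x 20 AEP features (e.g., peak latencies/amplitudes)
y = rng.integers(0, 2, size=60)  # 0 = normal hearing, 1 = impaired (illustrative labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```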


... The superiority of the SVM method may be due to its ability to solve small-sample, high-dimensional problems [36]. Abu Bakar et al. [6] conducted a study on the importance of machine learning in hearing impairment and analyzed machine learning technology related to hearing assessment over the past 13 years. The SVM classifier outperformed other methods, with an accuracy of over 80%, and was recognized as the best-suited model within the field of auditory research. ...
Article
Full-text available
Noise-induced hearing loss (NIHL) is a common occupational condition. The aim of this study was to develop a classification model for NIHL on the basis of both functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI) by applying machine learning methods. fMRI indices such as the amplitude of low-frequency fluctuation (ALFF), fractional amplitude of low-frequency fluctuation (fALFF), regional homogeneity (ReHo), and degree of centrality (DC), and sMRI indices such as gray matter volume (GMV), white matter volume (WMV), and cortical thickness, were extracted from each brain region. The least absolute shrinkage and selection operator was used to reduce and select the optimal features. The support vector machine (SVM), random forest (RF), and logistic regression (LR) algorithms were used to establish the classification model for NIHL. Finally, the SVM model based on combined fMRI indices achieved the best performance, with an area under the receiver operating characteristic curve of 0.97 and an accuracy of 95%. The SVM classification model that integrates fMRI indicators has the greatest potential for distinguishing NIHL patients from healthy people, revealing the complementary role of fMRI indicators in classification and indicating that it is necessary to include multiple indicators of the brain when establishing a classification model.
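The LASSO-then-SVM pipeline described above can be sketched as follows; the feature counts, sample sizes, and data are illustrative stand-ins for the regional fMRI/sMRI indices, not the authors' implementation.

```python
# Hedged sketch: LASSO-based feature selection followed by an SVM classifier,
# evaluated by ROC AUC, mirroring the NIHL pipeline described above.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 300))   # 80 subjects x 300 regional MRI indices (synthetic)
y = rng.integers(0, 2, size=80)  # 0 = healthy control, 1 = NIHL (synthetic)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),  # sparse selection of informative indices
    SVC(kernel="rbf"),               # final classifier; AUC uses its decision function
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {auc.mean():.2f}")
```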
Article
Full-text available
Introduction: The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. Methods: A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings published between 1 January 2010 and 27 March 2023 was conducted. Results: In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main applications of AI in medicine related to clinical prediction and disease prognosis in oncology and neurology, with the use of imaging data. Accuracy, specificity, and sensitivity were provided in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation was reported in 53% and less than 1% of cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machine and random forest/decision trees (1,578 and 1,522 ML algorithms, respectively). Expert opinion: The review indicated considerable reporting gaps in terms of ML performance, in both internal and external validation. Greater accessibility to healthcare data for developers can ensure the faster adoption of ML algorithms into clinical practice.
Preprint
Full-text available
The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. To exploit the potential of data-driven technologies, further integration of artificial intelligence (AI) into healthcare systems is warranted. A systematic literature review (SLR) of published SLRs evaluated evidence of ML applications in healthcare settings published in PubMed, IEEE Xplore, Scopus, Web of Science, EBSCO, and the Cochrane Library up to March 2023. Studies were classified based on the disease area and the type of ML algorithm used. In total, 220 SLRs covering 10,462 ML algorithms were identified, the majority of which aimed at solutions for clinical prediction, categorisation, and disease prognosis in oncology and neurology, primarily using imaging data. Accuracy, specificity, and sensitivity were reported in 56%, 28%, and 25% of cases, respectively. Internal validation was reported in 53% of the ML algorithms and external validation in less than 1%. The most common modelling approach was neural networks (2,454 ML algorithms), followed by support vector machine and random forest/decision trees (1,578 and 1,522 ML algorithms, respectively). The review indicated that there is potential for greater adoption of AI in healthcare, with 10,462 ML algorithms identified compared to 523 approved by the Food and Drug Administration (FDA). However, the considerable reporting gaps call for more effort towards internal and external validation. Greater accessibility to healthcare data for developers can ensure the faster adoption of ML algorithms.
Article
Objective: Substance abuse causes damage to brain structure and function. The aim of this research is to design an automated drug-dependence detection system based on EEG signals in multidrug (MD) abusers. Methods: EEG signals were recorded from participants categorized as MD-dependent (n = 10) or healthy controls (HC) (n = 12). Recurrence plots were used to investigate the dynamic characteristics of the EEG signal. The entropy index (ENTR), measured from recurrence quantification analysis, was taken as the complexity index of the delta, theta, alpha, beta, gamma, and all-band EEG signals. Statistical analysis was performed by t-test. The support vector machine technique was used for data classification. Results: The results show decreased ENTR indices in the delta, alpha, beta, gamma, and all-band EEG signals and an increased index in the theta band in MD abusers compared to the HC group, indicating reduced complexity of the delta, alpha, beta, gamma, and all-band EEG signals in the MD group. Additionally, the SVM classifier distinguished the MD group from the HC group with 90% accuracy, 89.36% sensitivity, 90.7% specificity, and an 89.8% F1 score. Conclusions and significance: The nonlinear analysis of brain data was used to build an automatic diagnostic aid system that can distinguish HC individuals from MD abusers.
Article
Full-text available
The field of signal processing using machine and deep learning algorithms has undergone significant growth in the last few years, with a wide scope of practical applications for electroencephalography (EEG). Transcutaneous electroacupuncture stimulation (TEAS) is a well-established variant of the traditional method of acupuncture that is also receiving increasing research attention. This paper presents the results of using deep learning algorithms on EEG data to investigate the effects on the brain of different frequencies of TEAS when applied to the hands in 66 participants, before, during and immediately after 20 min of stimulation. Wavelet packet decomposition (WPD) and a hybrid Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) model were used to examine the central effects of this peripheral stimulation. The classification results were analysed using confusion matrices, with kappa as a metric. Contrary to expectation, the greatest differences in EEG from baseline occurred during TEAS at 80 pulses per second (pps) or in the ‘sham’ (160 pps, zero amplitude), while the smallest differences occurred during 2.5 or 10 pps stimulation (mean kappa 0.414). The mean and CV for kappa were considerably higher for the CNN-LSTM than for the Multilayer Perceptron Neural Network (MLP-NN) model. As far as we are aware, from the published literature, no prior artificial intelligence (AI) research appears to have been conducted into the effects on EEG of different frequencies of electroacupuncture-type stimulation (whether EA or TEAS). This ground-breaking study thus offers a significant contribution to the literature. However, as with all (unsupervised) DL methods, a particular challenge is that the results are not easy to interpret, due to the complexity of the algorithms and the lack of a clear understanding of the underlying mechanisms. There is therefore scope for further research that explores the effects of the frequency of TEAS on EEG using AI methods, with the most obvious place to start being a hybrid CNN-LSTM model. This would allow for better extraction of information to understand the central effects of peripheral stimulation.
Preprint
Full-text available
A review of more than 4,000 articles published in 2021 related to artificial intelligence in healthcare. A BrainX Community exclusive annual publication with trends, specialist editorials, and categorized references readily available to provide insights into related 2021 publications. Cite as: Mathur P, Mishra S, Awasthi R, Cywinski J, et al. (2022). Artificial Intelligence in Healthcare: 2021 Year in Review. DOI: 10.13140/RG.2.2.25350.24645/1
Conference Paper
Full-text available
In recent years, rapid advances in speech technology have been made possible by machine learning challenges such as CHiME, REVERB, Blizzard, and Hurricane. In the Clarity project, the machine learning approach is applied to the problem of hearing aid processing of speech-in-noise, where current technology in enhancing the speech signal for the hearing aid wearer is often ineffective. The scenario is a (simulated) cuboid-shaped living room in which there is a single listener, a single target speaker and a single interferer, which is either a competing talker or domestic noise. All sources are static, the target is always within ±30 degrees azimuth of the listener and at the same elevation, and the interferer is an omnidirectional point source at the same elevation. The target speech comes from an open source 40-speaker British English speech database collected for this purpose. This paper provides a baseline description of the round one Clarity challenges for both enhancement (CEC1) and prediction (CPC1). To the authors' knowledge, these are the first machine learning challenges to consider the problem of hearing aid speech signal processing.
Article
Full-text available
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with and without mild hearing loss. We performed source analyses to estimate cortical surface signals from EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials without replacement to form feature vectors. We adopted a multivariate feature selection method, stability selection and control, to choose features that are consistent over a range of model parameters, and used a parameter-optimized support vector machine (SVM) as the classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy, 78.12% [AUC 77.64%; F1-score 78.00%], and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses using left-hemisphere (LH) and right-hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH. Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech), whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in the RH, when processing noise-degraded speech information.
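Stability selection, the feature-selection step used above, can be approximated with a short loop: fit a sparse model on many random subsamples and keep features chosen in a large fraction of fits. The data, penalty, and the 60% threshold below are assumptions for illustration.

```python
# Minimal sketch of stability selection: repeated subsampling + L1-penalized
# fits, keeping features with high selection frequency. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 500))  # trials x spatiotemporal ERP features (synthetic)
y = rng.integers(0, 2, size=100)

n_rounds = 100
freq = np.zeros(X.shape[1])
for _ in range(n_rounds):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)  # random half of trials
    lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lr.fit(X[idx], y[idx])
    freq += (lr.coef_.ravel() != 0)

stable = np.where(freq / n_rounds > 0.6)[0]  # selected in >60% of rounds
print(f"{stable.size} stable features out of {X.shape[1]}")
```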
Article
Full-text available
Objectives: Prognosticating idiopathic sudden sensorineural hearing loss (ISSNHL) is an important challenge. In our study, a dataset was split into training and test sets and cross-validation was implemented on the training set, thereby determining the hyperparameters for machine learning models with high test accuracy and low bias. The effectiveness of the following five machine learning models for predicting the hearing prognosis in patients with ISSNHL after 1 month of treatment was assessed: adaptive boosting, K-nearest neighbor, multilayer perceptron, random forest (RF), and support vector machine (SVM). Methods: The medical records of 523 patients with ISSNHL admitted to Korea University Ansan Hospital between January 2010 and October 2017 were retrospectively reviewed. In this study, we analyzed data from 227 patients (recovery, 106; no recovery, 121) after excluding those with missing data. To determine risk factors, statistical hypothesis tests (e.g., the two-sample t-test for continuous variables and the chi-square test for categorical variables) were conducted to compare patients who did or did not recover. Variables were selected using an RF model depending on two criteria (mean decreases in the Gini index and accuracy). Results: The SVM model using selected predictors achieved both the highest accuracy (75.36%) and the highest F-score (0.74) on the test set. The RF model with selected variables demonstrated the second-highest accuracy (73.91%) and F-score (0.74). The RF model with the original variables showed the same accuracy (73.91%) as that of the RF model with selected variables, but a lower F-score (0.73). All the tested models, except RF, demonstrated better performance after variable selection based on RF. Conclusion: The SVM model with selected predictors was the best-performing of the tested prediction models. The RF model with selected predictors was the second-best model. Therefore, machine learning models can be used to predict hearing recovery in patients with ISSNHL.
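The two-stage design above, random-forest variable ranking followed by an SVM on the selected predictors, might look like the sketch below; the variables, split, and cut-off of 10 features are illustrative assumptions.

```python
# Hedged sketch: rank predictors by random-forest importance (mean decrease in
# Gini), then train an SVM on the top-ranked subset. Synthetic clinical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(227, 30))    # 227 patients x 30 clinical variables (synthetic)
y = rng.integers(0, 2, size=227)  # 1 = hearing recovery after one month (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:10]  # 10 most important variables

svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print(f"Test accuracy on selected variables: {svm.score(X_te[:, top], y_te):.2f}")
```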
Article
Full-text available
This study presents a novel approach to assessing the auditory absolute threshold (ATTh) in healthy individuals exposed to noise and solvents in their occupational environment using machine learning approaches. 396 subjects with no known history of auditory pathology were chosen from three groups, namely employees from Chemical Industries (CI), employees from Fabrication Industries (FI), and professional Basketball Players (BP), with 132 subjects in each category. The Absolute Threshold Test (ATT) was developed using MATLAB and the experiment was conducted in a silent, noise-free environment. ATTh was obtained twice, at the commencement and conclusion of the employees' work shift in CI and FI; for BP, ATTh was obtained before and after their basketball training sessions. These thresholds were used as features for a binary SVM classification approach, in which the RBF kernel-based technique was found to provide the maximum accuracy compared to the linear and quadratic approaches. For three-class classification, an MLP neural network with a Levenberg-Marquardt training function in the hidden layer and a mean square error function in the output layer was found to be optimal, alongside k-Nearest Neighbor (kNN) and Support Vector Machine (SVM) approaches using a Radial Basis Function (RBF) kernel. In classifying the subjects into CI, FI, and BP, an accuracy of 81.06% was observed with the kNN approach and 92.4% with the MLP neural network, whereas the SVM yielded an accuracy of 93.94%, showing that the SVM outperformed the kNN and MLP neural network in classifying healthy subjects based on their occupational exposure or professional sports training. Such machine learning approaches could be probed further to improve classification accuracy. These techniques can also help in the real-time classification of subjects based on their occupational exposure, so as to predict and prevent possible permanent hearing dysfunction due to occupational exposure, as well as to aid sports rehabilitation and training programs in assessing the auditory perceptive abilities of individuals.
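The kernel comparison reported above (RBF beating linear and quadratic SVMs) can be reproduced in outline as below; the three-class labels and threshold features are synthetic stand-ins for the pre-/post-shift ATTh measurements.

```python
# Minimal sketch: comparing SVM kernels on three-class occupational-exposure
# data. Feature vectors are synthetic stand-ins for absolute-threshold values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(396, 16))    # 396 subjects x threshold features (synthetic)
y = rng.integers(0, 3, size=396)  # 0 = CI, 1 = FI, 2 = BP (illustrative labels)

for kernel, kwargs in [("linear", {}), ("poly", {"degree": 2}), ("rbf", {})]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, **kwargs))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:6s} kernel: mean CV accuracy {acc:.2f}")
```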
Article
Full-text available
Purpose: Cortical auditory evoked potential (CAEP) is a useful objective test for diagnosing hearing loss and auditory disorders. Prior to its clinical application among the pediatric population, the possible influences of fundamental variables on CAEP should be studied. The aim of the present study was to determine the effects of age and type of stimulus on CAEP waveforms. Methods: Thirty-five healthy Malaysian children aged 4 to 12 years participated in this repeated-measures study. CAEP waveforms were recorded from each child using a 1 kHz tone burst and the speech syllable /ba/. Latencies and amplitudes of the P1, N1, and P2 peaks were analyzed accordingly. Results: Significant negative correlations were found between age and speech-evoked CAEP latency for each peak (p < 0.05). On the other hand, no significant correlations were found between age and tone-evoked CAEP amplitudes and latencies (p > 0.05). The speech syllable /ba/ produced a higher mean P1 amplitude than the 1 kHz tone burst (p = 0.001). Conclusion: CAEP latencies recorded with the speech syllable became shorter with age. While both tone-burst and speech stimuli were appropriate for recording CAEP, significantly larger amplitudes were found for speech-evoked CAEP. The preliminary normative CAEP data provided in the present study can be beneficial for clinical and research applications involving Malaysian children.
Article
Full-text available
Sleep quality has a vital effect on good health and well-being throughout life. Getting enough sleep at the right times can help protect mental health, physical health, quality of life, and safety. In this study, an electroencephalography (EEG)-based machine-learning approach is proposed to measure sleep quality. The advantages of this approach over the standard polysomnography (PSG) method are: 1) it measures sleep quality by recognizing three sleep categories rather than five sleep stages, so higher accuracy can be expected; and 2) the three sleep categories are recognized by analyzing EEG signals only, so the user experience is improved because fewer sensors are attached to the body during sleep. Using quantitative features obtained from EEG signals, we developed a new automatic sleep-staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. We used polysomnographic data from the PhysioBank database, in which the sleep stages have been visually annotated, to train, evaluate, and test the performance of the framework. The results demonstrated that the proposed approach achieves high classification performance, which helps to measure sleep quality accurately. This framework can provide a robust and accurate sleep quality assessment that helps clinicians to determine the presence and severity of sleep disorders and to evaluate the efficacy of treatments.
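A decision-tree-structured multi-class SVM of the kind described above can be composed from binary SVMs; the three-category split and features below are assumptions for illustration, not the authors' exact hierarchy.

```python
# Hedged sketch: a two-level tree of binary SVMs for three sleep categories.
# The first SVM separates wake from sleep; the second splits the sleep epochs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))    # EEG epochs x quantitative features (synthetic)
y = rng.integers(0, 3, size=300)  # 0 = wake, 1 = light sleep, 2 = deep sleep (illustrative)

svm_root = SVC().fit(X, (y > 0).astype(int))                           # wake vs. sleep
sleep_mask = y > 0
svm_leaf = SVC().fit(X[sleep_mask], (y[sleep_mask] == 2).astype(int))  # light vs. deep

def predict_category(x):
    x = x.reshape(1, -1)
    if svm_root.predict(x)[0] == 0:
        return 0                                   # wake
    return 2 if svm_leaf.predict(x)[0] == 1 else 1

print(predict_category(X[0]))
```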
Article
Full-text available
Machine learning leverages statistical and computer science principles to develop algorithms capable of improving performance through interpretation of data rather than through explicit instructions. Alongside widespread use in image recognition, language processing, and data mining, machine learning techniques have received increasing attention in medical applications, ranging from automated imaging analysis to disease forecasting. This review examines the parallel progress made in epilepsy, highlighting applications in automated seizure detection from electroencephalography (EEG), video, and kinetic data, automated imaging analysis and pre‐surgical planning, prediction of medication response, and prediction of medical and surgical outcomes using a wide variety of data sources. A brief overview of commonly used machine learning approaches, as well as challenges in further application of machine learning techniques in epilepsy, is also presented. With increasing computational capabilities, availability of effective machine learning algorithms, and accumulation of larger datasets, clinicians and researchers will increasingly benefit from familiarity with these techniques and the significant progress already made in their application in epilepsy.
Article
Full-text available
A successful Hearing-Aid Fitting (HAF) is more than just selecting an appropriate Hearing Aid (HA) device for a patient with Hearing Loss (HL). The initial fitting is given by the prescription based on the user's hearing loss; however, it is often necessary for the audiologist to readjust some parameters to satisfy the user's demands. Therefore, in this paper, we concentrate on a new application of a Neural Network (NN) combined with a Transfer Learning (TL) strategy to develop a fitting algorithm from a database of hearing-loss prescriptions and readjusted gains, to minimize the gap in fitting satisfaction. As prior information, we generated the data set from two popular hearing-aid fitting software packages, fed the training data to our proposed model, and verified the performance of the architecture. Considering real-life circumstances, where numerous fitting records may not always be accessible, we first investigated the minimum number of fitting records required for sufficient training. After that, we evaluated the performance of the proposed algorithm in two phases: (a) the NN with refined hyperparameters showed enhanced performance compared to a state-of-the-art DNN approach, and (b) the TL approach broadly boosted the performance of the NN algorithm. Altogether, our model provides a pragmatic and promising tool for HAF.
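The pre-train-then-fine-tune idea behind the NN + TL fitting algorithm might be sketched as follows, assuming abundant prescription-derived gain targets and only a few audiologist-readjusted records; the architecture, layer sizes, and data are assumptions, not the authors' code.

```python
# Hedged sketch of transfer learning for hearing-aid gain fitting: pre-train a
# small regression network on many prescription records, freeze its feature
# layers, then fine-tune the output layer on scarce readjusted records.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(6)
X_big, y_big = rng.normal(size=(5000, 8)), rng.normal(size=(5000, 6))  # prescriptions
X_few, y_few = rng.normal(size=(50, 8)), rng.normal(size=(50, 6))      # readjusted gains

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),                  # e.g., audiogram-derived inputs
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6),                    # gains per frequency band
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_big, y_big, epochs=5, verbose=0)     # pre-training phase

for layer in model.layers[:-1]:
    layer.trainable = False                      # freeze feature layers
model.compile(optimizer="adam", loss="mse")      # recompile after freezing
model.fit(X_few, y_few, epochs=20, verbose=0)    # fine-tune on scarce records
```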
Article
Full-text available
Older adults commonly report difficulty understanding speech, particularly in adverse listening environments. These communication difficulties may exist in the absence of peripheral hearing loss. Older adults, both with normal hearing and with hearing loss, demonstrate temporal processing deficits that affect speech perception. The purpose of the present study is to investigate aging, cognition, and neural processing factors that may lead to deficits on perceptual tasks that rely on phoneme identification based on a temporal cue – vowel duration. A better understanding of the neural and cognitive impairments underlying temporal processing deficits could lead to more focused aural rehabilitation for improved speech understanding for older adults. This investigation was conducted in younger (YNH) and older normal-hearing (ONH) participants who completed three measures of cognitive functioning known to decline with age: working memory, processing speed, and inhibitory control. To evaluate perceptual and neural processing of auditory temporal contrasts, identification functions for the contrasting word-pair WHEAT and WEED were obtained on a nine-step continuum of vowel duration, and frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) were recorded to the two endpoints of the continuum. Multiple linear regression analyses were conducted to determine the cognitive, peripheral, and/or central mechanisms that may contribute to perceptual performance. YNH participants demonstrated higher cognitive functioning on all three measures compared to ONH participants. The slope of the identification function was steeper in YNH than in ONH participants, suggesting a clearer distinction between the contrasting words in the YNH participants. FFRs revealed better response waveform morphology and more robust phase-locking in YNH compared to ONH participants. ONH participants also exhibited earlier latencies for CAEP components compared to the YNH participants. Linear regression analyses revealed that cortical processing significantly contributed to the variance in perceptual performance in the WHEAT/WEED identification functions. These results suggest that reduced neural precision contributes to age-related speech perception difficulties that arise from temporal processing deficits.
Article
Full-text available
The technology of reading human mental states is a leading innovation in the biomedical engineering field. EEG signal processing helps us explore the uniqueness of brain signals, which carry thousands of pieces of information about a human being. The aim of this study is to analyze brain signal features that distinguish pleasure from displeasure mental states. Brainwaves are divided into five frequency sub-bands, namely alpha (8-13 Hz), beta (13-30 Hz), gamma (30-100 Hz), theta (4-8 Hz), and delta (1-4 Hz); in this study, the alpha and beta waves were analyzed to investigate the mental states. Twenty healthy, right-handed subjects with an undergraduate engineering background, aged between 19 and 23 years, were recruited at UniMAP. Each subject was required to view a series of pleasure and displeasure images for 10 minutes and to rest for 30 seconds between the pleasure and displeasure viewings. A Truscan EEG device (Deymed Diagnostic, Alien Technic, Czech Republic) with 19 channels was used to acquire EEG data at a sampling frequency of 1024 Hz, with impedance kept below 5 kΩ. A bandpass filter was used to extract the alpha and beta waves. The signal was segmented, and PSD values were calculated for both mental states using the Welch and Burg methods. Seven statistical features (mean, mode, median, variance, standard deviation, minimum, and maximum) were obtained from the PSD values and used as input to the classifiers. K-Nearest Neighbour (KNN) and Linear Discriminant Analysis (LDA) were used to classify the two mental states. The Welch method gave the highest classification accuracy: 99.3% for alpha waves, followed by 97.5% for beta waves from channel F4. It can be concluded that alpha waves are the most promising for differentiating pleasure and displeasure features.
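The pipeline above (Welch PSD, band statistics, then a KNN classifier) might be sketched like this; the sampling rate and band edges follow the abstract, while the EEG epochs, labels, and the exact statistics are simulated or simplified.

```python
# Minimal sketch: Welch power spectral density, alpha/beta band statistics,
# and a KNN classifier for pleasure vs. displeasure states. Synthetic EEG.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

fs = 1024                               # sampling rate from the abstract
rng = np.random.default_rng(7)
epochs = rng.normal(size=(40, fs * 4))  # 40 single-channel 4-s epochs
labels = rng.integers(0, 2, size=40)    # 0 = pleasure, 1 = displeasure

def band_features(x):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in [(8, 13), (13, 30)]:  # alpha and beta bands
        band = pxx[(f >= lo) & (f < hi)]
        feats += [band.mean(), np.median(band), band.var(), band.std(),
                  band.min(), band.max()]
    return feats

X = np.array([band_features(e) for e in epochs])
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, labels, cv=5).mean()
print(f"KNN mean CV accuracy: {acc:.2f}")
```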
Article
Full-text available
The classification of brain response signals according to human hearing ability is a complex undertaking. This study presents a novel formulated index for accurately predicting and classifying human hearing abilities based on auditory brain responses. Moreover, we present five classification algorithms to classify hearing abilities [normal hearing and sensorineural hearing loss (SNHL)] based on different auditory stimuli. The brain response signals used were electroencephalography (EEG) responses evoked by two auditory stimuli (tones and consonant-vowel stimuli). The study was carried out on Malaysian (Malay) citizens with and without normal hearing abilities. A new ranking process for the subjects' EEG data, as well as a ranking of the nonlinear features, was used to obtain the maximum classification accuracy. The study formulated classification indices (CVHI, PTHI & HAI) that classify human hearing abilities based on brain auditory responses, using the numerical values of signal features. The K-nearest neighbor and support vector machine classifiers were quite accurate in classifying auditory brain responses for hearing abilities. The proposed indices are valuable tools for classifying brain responses, especially in the context of human hearing abilities. Keywords: Cortical Auditory Evoked Potentials (CAEPs); Empirical Mode Decomposition (EMD); EEG; auditory stimuli; hearing disorder; brain responses; Malay; Chinese; medical conditions; ethnicity. https://link.springer.com/article/10.1007/s13369-019-03835-5
Article
Full-text available
OBJECTIVES: This study uses a new approach for classifying human ethnicity according to auditory brain responses (electroencephalography [EEG] signals) with a high level of accuracy. Moreover, the study presents three different algorithms used to classify human ethnicity using auditory brain responses; the algorithms were tested on Malays and Chinese as a case study. MATERIALS AND METHODS: The EEG signal was used as the brain response signal, evoked by two auditory stimuli (tones and consonant-vowel stimuli). The study was carried out on Malaysians (Malay and Chinese) with normal hearing and with hearing loss. A ranking process for the subjects' EEG data and the nonlinear features was used to obtain the maximum classification accuracy. RESULTS: The study formulated the Normal Hearing Ethnicity Index and the Sensorineural Hearing Loss Ethnicity Index. These indices classified human ethnicity according to brain auditory responses by using numerical values of response-signal features. Three classification algorithms were used to verify human ethnicity. The Support Vector Machine (SVM) classified human ethnicity with an accuracy of 90% in normal hearing cases; in sensorineural hearing loss (SNHL) cases, the SVM classified with an accuracy of 84%. CONCLUSION: The classification indices separated human ethnicity in both normal hearing and SNHL cases with high accuracy. The SVM classifier provided good accuracy in the classification of the auditory brain responses. The proposed indices might constitute valuable tools for the classification of brain responses according to human ethnicity. KEYWORDS: Cortical Auditory Evoked Potentials (CAEPs); regression; Support Vector Machine; sensorineural hearing loss; human ethnicity determination; electroencephalography; auditory brain responses.
Article
Full-text available
Objectives: To demonstrate the feasibility of developing machine learning models for the prediction of hearing impairment in humans exposed to complex non-Gaussian industrial noise. Design: Audiometric and noise exposure data were collected on a population of screened workers (N = 1,113) from 17 factories located in Zhejiang province, China. All the subjects were exposed to complex noise. Each subject was given an otologic examination to determine their pure-tone hearing threshold levels and had their personal full-shift noise recorded. For each subject, the hearing loss was evaluated according to the hearing impairment definition of the National Institute for Occupational Safety and Health. Age, exposure duration, equivalent A-weighted SPL (LAeq), and median kurtosis were used as the input for four machine learning algorithms, that is, support vector machine, neural network multilayer perceptron, random forest, and adaptive boosting. Both classification and regression models were developed to predict noise-induced hearing loss applying these four machine learning algorithms. Two indexes, area under the curve and prediction accuracy, were used to assess the performances of the classification models for predicting hearing impairment of workers. Root mean square error was used to quantify the prediction performance of the regression models. Results: A prediction accuracy between 78.6 and 80.1% indicated that the four classification models could be useful tools to assess noise-induced hearing impairment of workers exposed to various complex occupational noises. A comprehensive evaluation using both the area under the curve and prediction accuracy showed that the support vector machine model achieved the best score and thus should be selected as the tool with the highest potential for predicting hearing impairment from the occupational noise exposures in this study. The root mean square error performance indicated that the four regression models could be used to predict noise-induced hearing loss quantitatively and the multilayer perceptron regression model had the best performance. Conclusions: This pilot study demonstrated that machine learning algorithms are potential tools for the evaluation and prediction of noise-induced hearing impairment in workers exposed to diverse complex industrial noises.
Article
Full-text available
Objective. Most current electroencephalography (EEG)-based brain–computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately ten years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons. Main results. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
Article
Full-text available
The deficiency in the human auditory system of individuals suffering from sensorineural hearing loss (SNHL) is known to be associated with difficulty in detecting various phonological features of speech that are frequently related to speech perception. This study investigated the effects of speech articulation features on the amplitude and latency of cortical auditory evoked potential (CAEP) components. The speech articulation features comprised a placing contrast and a voicing contrast. Twelve Malay subjects with normal hearing and 12 Malay subjects with SNHL were recruited for the study. The CAEP responses were recorded with higher amplitudes and longer latencies when stimulated by voicing-contrast cues compared to the placing contrast. Subjects with SNHL showed greater amplitudes with prolonged latencies in the majority of the CAEP components for both speech stimuli. The existence of different spectral and time-varying acoustic cues in the speech stimuli was reflected in the strength and timing of the CAEP responses. We anticipate that CAEP responses could equip audiologists and clinicians with useful knowledge concerning the perceptual deprivation experienced by hearing-impaired individuals during passive auditory perception. This would help determine what type of speech stimuli might be useful in measuring speech perception abilities, especially in the Malay Malaysian ethnic group, for choosing a better rehabilitation program, since no such study has been conducted to evaluate speech perception in the Malaysian clinical population.
Article
Full-text available
Manufacturing systems face ever more complex, dynamic, and at times even chaotic behaviors. In order to satisfy the demand for high-quality products in an efficient manner, it is essential to utilize all means available. One area which has seen fast-paced developments, in terms of not only promising results but also usability, is machine learning. Promising an answer to many of the old and new challenges of manufacturing, machine learning is widely discussed by researchers and practitioners alike. However, the field is very broad and even confusing, which presents a challenge and a barrier hindering wide application. This paper contributes by presenting an overview of available machine learning techniques and structuring this rather complicated area. A special focus is laid on the potential benefits, and examples of successful applications in a manufacturing environment are given.
Article
Full-text available
Introduction: The impact of auditory sensory deprivation on the life of an individual is enormous, because it affects not only one's ability to properly understand auditory information, but also the way people relate to their environment and their culture. The monitoring of adult and elderly subjects with hearing loss is intended to minimize the difficulties and handicaps that occur as a consequence of this pathology. Objective: To evaluate the level of user satisfaction with hearing aids. Methods: A clinical and experimental study involving 91 elderly hearing aid users. We used the Satisfaction with Amplification in Daily Life (SADL) questionnaire to determine the degree of satisfaction provided by hearing aids. We evaluated the mean global score and subscales, as well as the variables time of use, age, and degree of hearing loss. Results: The mean global score was 4.73; the scores for Positive Effects (5.45) and Negative Factors (3.2) demonstrated that users were satisfied; Services and Costs scored 5.98 (very satisfied) and Personal Image 3.65 (dissatisfied). We observed statistically significant differences for time of hearing aid use, age, and degree of hearing loss. Conclusion: The SADL is a simple and easy-to-apply tool, and in this study we demonstrated a high degree of satisfaction with hearing aids in the majority of the sample, increasing with time of use and a greater degree of hearing loss.
Article
Full-text available
Introduction: Hearing difficulties can be minimized by the use of hearing aids. Objective: To assess the speech perception and satisfaction of hearing aid users before and after aid adaptation, and to determine whether these measures are correlated. Methods: The study was conducted on 65 individuals, 54% female and 46% male, aged 63 years on average, after systematic use of hearing aids for at least three months. We characterized the subjects' personal identification data, the degree and configuration of hearing loss, and aspects related to adaptation. We then applied a satisfaction questionnaire and a speech perception test (words and sentences), with and without the hearing aids. Results: Mean speech recognition with words and sentences was 69% and 79%, respectively, with hearing aid use, versus 43% and 53% without. The mean questionnaire score was 30.1 points. Regarding hearing loss characteristics, 78.5% of the subjects had a sensorineural loss, 20% a mixed loss, and 1.5% a conductive loss. Hearing loss of moderate degree was present in 60.5% of cases, loss of descending configuration in 47%, and flat configuration in 37.5%. There was no correlation between individual satisfaction and the percentages on the speech perception tests applied. Conclusion: Word and sentence recognition was significantly better with the use of hearing aids. The users showed a high degree of satisfaction. In the present study, no correlation was observed between levels of speech perception and levels of user satisfaction measured with the questionnaire.
Article
Full-text available
Hearing loss is one of the most common disabilities present in newborns and infants in the world. The applicability of a conventional hearing screening test is limited, as it requires a feedback response from the subject under test. To overcome such problems, the primary focus of this study is to develop an intelligent hearing-ability-level assessment system using auditory evoked potential (AEP) signals. The AEP signal is a non-invasive tool that reflects the stimulated interactions of neurons along the stations of the auditory pathway. The AEP responses of fourteen normal-hearing subjects to auditory stimuli (20 dB, 30 dB, 40 dB, 50 dB, and 60 dB) were derived from electroencephalogram (EEG) recordings. Higuchi's fractal method was applied to extract fractal features from the recorded AEP signals, and the extracted features were then associated with the subjects' different hearing perception levels. Feed-forward and feedback neural networks were employed to distinguish the different hearing perception levels. The performance of the proposed intelligent hearing-ability-level assessment was found to exceed 85% accuracy. This study indicates that the AEP responses of normal-hearing persons to auditory stimuli can predict higher-order auditory stimuli followed by lower-order auditory stimuli, and consequently the state of auditory development of the subjects.
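Higuchi's fractal method, the feature extractor named above, is compact enough to show in full; the k_max value and the white-noise test signal are illustrative choices.

```python
# Higuchi's fractal dimension for a 1-D signal: average normalized curve
# lengths L(k) over offsets, then take the slope of log L(k) vs. log(1/k).
import numpy as np

def higuchi_fd(x, k_max=10):
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                  # one curve per starting offset
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            diffs = np.abs(np.diff(x[idx])).sum()
            lengths.append(diffs * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope                            # the estimated fractal dimension

rng = np.random.default_rng(8)
print(f"FD of white noise: {higuchi_fd(rng.normal(size=2048)):.2f}")  # expected near 2
```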
Article
Full-text available
Depression is a mental disorder characterized by persistent occurrences of lower mood states in the affected person. The electroencephalogram (EEG) signals are highly complex, nonlinear, and nonstationary in nature, and their characteristics vary with the age and mental state of the subject. The signs of abnormality may be invisible to the naked eye; even when they are visible, deciphering the minute changes indicating abnormality is tedious and time-consuming for clinicians. This paper presents a novel method for automated EEG-based diagnosis of depression using nonlinear methods: fractal dimension, largest Lyapunov exponent, sample entropy, detrended fluctuation analysis, Hurst's exponent, higher-order spectra, and recurrence quantification analysis. A novel Depression Diagnosis Index (DDI) is presented through a judicious combination of the nonlinear features. The DDI, calculated automatically from the EEG recordings, can be used to diagnose depression objectively using just one numeric value. Also, the features extracted from the nonlinear methods are ranked using the t value and fed to the support vector machine (SVM) classifier. The SVM classifier yielded the highest classification performance, with an average accuracy of about 98%, sensitivity of about 97%, and specificity of about 98.5%.
Article
Full-text available
A scoping review focused on background sounds and adult hearing-aid users, including aspects of aversiveness and interference. The aim was to establish the current body of knowledge, identify knowledge gaps, and suggest possible future directions for research. Data were gathered using a systematic search strategy, consistent with scoping review methodology. Searches of public databases between 1988 and 2014 returned 1182 published records; after exclusions for duplicates and out-of-scope works, 75 records remained for further analysis. Content analysis was used to group the records into five separate themes relating to background sounds: the development and validation of outcome instruments, satisfaction surveys, assessments of hearing-aid technology and signal processing, acclimatization to the device post-fitting, and non-auditory influences on benefit and satisfaction. A large proportion of hearing-aid users still find particular hearing-aid features and attributes dissatisfying when listening in background sounds. Many conclusions are limited by methodological drawbacks in study design and too many different outcome instruments. Future research needs to address these issues while controlling for the hearing-aid fitting.
Article
Full-text available
Hypoacusis is the most prevalent sensory disability in the world and, consequently, can impede speech in human beings. One of the best approaches to tackling this issue is to conduct early and effective hearing screening tests using electroencephalography (EEG). EEG-based hearing threshold determination is most suitable for persons who lack verbal communication or a behavioral response to sound stimulation. The auditory evoked potential (AEP) is a type of EEG signal evoked at the scalp by an acoustic stimulus. The goal of this review is to assess the current state of knowledge in estimating hearing threshold levels based on the AEP response, which reflects the auditory ability of an individual. An intelligent hearing perception level system makes it possible to examine and determine the functional integrity of the auditory system. Systematic evaluation of EEG-based hearing perception level systems for predicting hearing loss in newborns, infants, and persons with multiple handicaps will be a priority of interest for future research.
Article
Full-text available
A fundamental goal of the human auditory system is to map complex acoustic signals onto stable internal representations of the basic sound patterns of speech. Phonemes and the distinctive features that they comprise constitute the basic building blocks from which higher-level linguistic representations, such as words and sentences, are formed. Although the neural structures underlying phonemic representations have been well studied, there is considerable debate regarding frontal-motor cortical contributions to speech as well as the extent of lateralization of phonological representations within auditory cortex. Here we used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis to investigate the distributed patterns of activation that are associated with the categorical and perceptual similarity structure of 16 consonant exemplars in the English language used in Miller and Nicely's (1955) classic study of acoustic confusability. Participants performed an incidental task while listening to phonemes in the MRI scanner. Neural activity in bilateral anterior superior temporal gyrus and supratemporal plane was correlated with the first two components derived from a multidimensional scaling analysis of a behaviorally derived confusability matrix. We further showed that neural representations corresponding to the categorical features of voicing, manner of articulation, and place of articulation were widely distributed throughout bilateral primary, secondary, and association areas of the superior temporal cortex, but not motor cortex. Although classification of phonological features was generally bilateral, we found that multivariate pattern information was moderately stronger in the left compared with the right hemisphere for place but not for voicing or manner of articulation.
Article
Full-text available
Hearing threshold estimation based on cortical auditory evoked potentials (CAEPs) has been applied for some decades. However, available research is scarce evaluating the accuracy of this technique with an automated paradigm for the objective detection of CAEPs. The aims were to determine the difference between behavioral and CAEP thresholds detected using an objective paradigm based on Hotelling's T² statistic, and to propose a decision tree to choose the next stimulus level in a sample of hearing-impaired adults. This knowledge could potentially increase the efficiency of clinical hearing threshold testing. In this correlational cohort study, thresholds obtained behaviorally were compared with thresholds obtained through cortical testing. Thirty-four adults with hearing loss participated. For each audiometric frequency and each ear, behavioral thresholds were collected with both pure-tone and 40-msec tone-burst stimuli. Then, corresponding cortical hearing thresholds were determined. An objective cortical-response detection algorithm based on Hotelling's T² statistic was applied to determine response presence, and a decision tree was used to select the next stimulus level. In total, 241 behavioral-cortical threshold pairs were available for analysis. The differences between CAEP and behavioral thresholds (and their standard deviations [SDs]) were determined for each audiometric frequency. Cortical amplitudes and electroencephalogram noise levels were extracted. The practical applicability of the decision tree was evaluated and compared to a Hughson-Westlake paradigm. It was shown that, when collapsed over all audiometric frequencies, behavioral pure-tone thresholds were on average 10 dB lower than 40-msec cortical tone-burst thresholds, with an SD of 10 dB. Four percent of CAEP thresholds, all obtained from just three individual participants, were more than 30 dB higher than their behavioral counterparts. The use of a decision tree instead of a Hughson-Westlake procedure to obtain a CAEP threshold did not seem to reduce test time, but there was significantly less variation in the number of CAEP trials needed to determine a threshold. Behavioral hearing thresholds in hearing-impaired adults can be determined with an acceptable degree of accuracy (mean threshold correction and SD of both 10 dB) using an objective statistical cortical-response detection algorithm in combination with a decision tree to determine the test levels.
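The objective detection step above rests on a one-sample Hotelling's T² test: each epoch is summarized by a few mean-voltage bins, and the test asks whether their joint mean differs from zero (response present). The bin count, alpha level, and simulated epochs below are assumptions, not the study's exact parameters.

```python
# Hedged sketch: Hotelling's T²-based detection of an evoked response in a
# set of EEG epochs, using mean-voltage bins as the multivariate sample.
import numpy as np
from scipy.stats import f as f_dist

def response_present(epochs, n_bins=9, alpha=0.05):
    n, n_samp = epochs.shape
    usable = n_samp - n_samp % n_bins                 # trim to a multiple of n_bins
    bins = epochs[:, :usable].reshape(n, n_bins, -1).mean(axis=2)
    xbar = bins.mean(axis=0)                          # mean voltage per bin
    S = np.cov(bins, rowvar=False)                    # bin covariance matrix
    t2 = n * xbar @ np.linalg.solve(S, xbar)          # Hotelling's T² statistic
    f_stat = (n - n_bins) / (n_bins * (n - 1)) * t2   # convert to an F statistic
    p = f_dist.sf(f_stat, n_bins, n - n_bins)
    return p < alpha, p

rng = np.random.default_rng(9)
noise = rng.normal(size=(100, 450))                        # 100 epochs, no response
evoked = noise + 0.5 * np.sin(np.linspace(0, np.pi, 450))  # add a deflection
print(response_present(noise))   # expected: (False, large p)
print(response_present(evoked))  # expected: (True, small p)
```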
Article
Full-text available
In high-dimensional, large-sample data sets, correlations between feature variables and between samples can introduce correlative or repetitive factors that occupy substantial storage space and consume much computing time. When an Elman neural network is used on such data, too many inputs reduce operating efficiency and recognition accuracy, and too many simultaneous training samples, along with the inability to obtain a precise neural network model, also restrict recognition accuracy. Aiming at this series of problems, we introduce partial least squares (PLS) and cluster analysis (CA) into the Elman neural network algorithm: PLS performs dimension reduction, eliminating the correlative and repetitive factors among the features, while CA eliminates the correlative and repetitive factors among the samples. If some subclass becomes a small sample, with high-dimensional features and few instances, PLS shows a unique advantage. Each subclass is used as a training set to train its own precise neural network model; simulation samples are then assigned to the different subclasses and recognized by the corresponding neural network. An optimized Elman neural network classification algorithm based on PLS and CA (the PLS-CA-Elman algorithm) is thus established, aiming to improve operating efficiency and recognition accuracy. Case analysis shows that the new algorithm has unique advantages and is worthy of further promotion.
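A rough analogue of the PLS-CA preprocessing can be assembled from standard components; since Elman networks are not available in scikit-learn, an MLP stands in for the per-cluster network, and all sizes here are illustrative assumptions.

```python
# Hedged sketch: PLS for dimension reduction, k-means for sample grouping,
# and one small network per cluster (MLP standing in for the Elman network).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(11)
X = rng.normal(size=(600, 200))   # high-dimensional samples (synthetic)
y = rng.integers(0, 2, size=600)

Z = PLSRegression(n_components=10).fit(X, y).transform(X)  # reduced features
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(Z)  # group the samples

nets = {}
for c in np.unique(clusters):
    m = clusters == c
    nets[c] = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(Z[m], y[m])

# a new sample is routed to its cluster's network for recognition
print({int(c): round(nets[c].score(Z[clusters == c], y[clusters == c]), 2) for c in nets})
```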
Article
The use of machine learning (ML) in healthcare raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of healthcare. Specifically, we frame ethics of ML in healthcare through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to postdeployment considerations. We close by summarizing recommendations to address these challenges.
Article
Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach which can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches the accuracy obtained with the more conventional cross-validation classifications, while allowing a subject's results to be analysed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward the translation of classification-based machine learning approaches into clinical practice.
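The leave-one-subject-out scheme contrasted above maps directly onto scikit-learn's LeaveOneGroupOut, with subject IDs as groups; the FFR features and labels below are simulated.

```python
# Minimal sketch: leave-one-subject-out cross-validation, where each subject's
# trials are classified by a model trained only on the other subjects.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(10)
n_subjects, n_trials = 13, 20
X = rng.normal(size=(n_subjects * n_trials, 40))    # FFR feature vectors (synthetic)
y = rng.integers(0, 2, size=n_subjects * n_trials)  # stimulus label per trial
groups = np.repeat(np.arange(n_subjects), n_trials) # subject ID for each trial

scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean held-out-subject accuracy: {scores.mean():.2f}")
```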
Conference Paper
Hearing impairment is the primary cause of deafness worldwide and can affect one or both ears. If identified in time, hearing loss can be mitigated by taking specific precautions. In this paper, we investigate the detection of hearing loss through auditory system responses. Auditory perception and human age are highly interrelated; accordingly, a significant gap between a listener's real age and the age estimated from auditory responses can indicate hearing loss. Our proposed system for human age estimation shows promising results, with a Root Mean Square Error (RMSE) of 4.1 years and a hearing-loss classification performance of 94%, demonstrating the applicability of our approach to hearing-loss detection.
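A short sketch of the age-gap screening idea may help: estimate age from auditory-response features, then flag hearing loss when the estimated age exceeds the chronological age by some margin. The regressor, the 10-year threshold, and the variable names (`X_train`, `age_train`, etc.) are all assumptions for illustration, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

reg = RandomForestRegressor(random_state=0)
reg.fit(X_train, age_train)                 # auditory features -> chronological age

age_pred = reg.predict(X_test)
rmse = np.sqrt(mean_squared_error(age_test, age_pred))
print(f"RMSE = {rmse:.1f} years")

# A large positive gap (auditory system "older" than the person) is treated
# here as a hearing-loss indicator; the threshold is a placeholder.
suspect_loss = (age_pred - age_test) > 10.0
```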
Article
Electroencephalography (EEG) has been a staple method for identifying certain health conditions in patients since its discovery. Because many different types of classifiers are available, the analysis methods are equally numerous. In this review, we examine machine learning methods that have been developed for EEG analysis with bioengineering applications. We reviewed the literature from 1988 to 2018 to capture previous and current classification methods for EEG in multiple applications. From this information, we are able to determine the overall effectiveness of each machine learning method as well as its key characteristics. We found that all the primary machine learning methods have been applied in some form to EEG classification, ranging from Naive Bayes to Decision Tree/Random Forest to Support Vector Machine (SVM). Supervised learning methods, including SVM and KNN, are on average more accurate than their unsupervised counterparts. While each method individually is limited in accuracy within its respective application, combinations of methods, when implemented properly, hold promise for higher overall classification accuracy. This paper provides a comprehensive overview of machine learning applications in EEG analysis, along with an overview of each method and the general applications to which it is best suited.
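The kind of head-to-head accuracy comparison this review reports can be reproduced on one's own data with a few lines of scikit-learn; in this sketch, `X` and `y` are assumed to be precomputed per-epoch EEG features and labels, and nothing here comes from the reviewed papers:

```python
# Small sketch comparing several of the classifier families named above
# on the same EEG feature matrix via 5-fold cross-validation.
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```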
Article
This article examines and evaluates, from an audiologist's perspective, methods of reducing common complaints and issues with conventional hearing aids, such as the occlusion effect, acoustic feedback, discomfort, and insufficient gain. Although often successful, reducing one problem may come at the cost of causing another. The article is meant to inform the reader about modern conventional hearing aids, the means to alleviate common problems in the clinic, and when middle ear implants and osseointegrated implants can be beneficial.
Article
Cortical auditory evoked potentials represent the summation of neural activity in the auditory pathways in response to sounds, providing an objective measure of the brain's response to sound. For this reason, they are an effective tool for scientists and audiologists investigating auditory function in people with normal hearing and in those with hearing loss. The main objective of this study was to determine which components among P1, N1, P2, N2, and P3 are most useful in assessing the speech detection and discrimination abilities of adults with sensorineural hearing loss. The study also investigated whether changes in the amplitudes and latencies of these components occurring with sensorineural hearing loss and hearing aids differ in responses reflecting different stages of auditory processing. Auditory potentials were recorded to /ba/ and /da/ stimuli from two Malay adult groups: a control group of 12 right-handed participants with normal hearing and a group of 10 right-handed participants with sensorineural hearing loss. The results showed that the P2 and P3 components benefited most from the use of hearing aids in the hearing-loss subjects and could therefore be used in both clinical and research applications as a predictor and objective indicator of hearing aid performance in speech perception. The study also showed that the brain processes the two stimuli in different patterns in both the normal-hearing and the aided hearing-loss subjects. These findings could provide more diagnostic information for clinicians and help hearing-impaired individuals obtain better speech-perception benefits from their personal hearing aids. The findings also suggest that the aided hearing-loss subjects, despite the benefit they receive from their hearing aids, find it difficult to detect and discriminate the acoustic differences between the two speech stimuli.
Article
Autistic Spectrum Disorder (ASD) is a mental disorder that retards the acquisition of linguistic, communication, cognitive, and social skills and abilities. Despite being diagnosed with ASD, some individuals exhibit outstanding scholastic, non-academic, and artistic capabilities, posing a challenge for scientists seeking explanations. In the last few years, ASD has been investigated by social and computational intelligence scientists utilizing advanced technologies such as machine learning to improve diagnostic timing, precision, and quality. Machine learning is a multidisciplinary research topic that employs intelligent techniques to discover useful concealed patterns, which are utilized in prediction to improve decision making. Machine learning techniques such as support vector machines, decision trees, logistic regression, and others have been applied to autism-related datasets to construct predictive models. These models claim to enhance the ability of clinicians to provide robust diagnoses and prognoses of ASD. However, studies on the use of machine learning in ASD diagnosis and treatment suffer from conceptual, implementation, and data issues, such as how diagnostic codes are used, the type of feature selection employed, the evaluation measures chosen, and class imbalances in the data, among others. A more serious claim in recent studies is the development of new methods for ASD diagnosis based on machine learning. This article critically analyses these recent investigative studies on autism, not only articulating the aforementioned issues but also recommending paths forward that enhance the use of machine learning in ASD with respect to conceptualization, implementation, and data. Future studies concerning machine learning in autism research would benefit greatly from such proposals.
Article
Objective: To examine cortical auditory evoked potentials (CAEPs) and behavioural measures of spatial speech-in-noise recognition, sound localization, and self-reported perception of hearing performance before and after surgical removal of an acoustic neuroma, and to monitor changes over time after surgery. Methods: CAEPs in noise were recorded and auditory skills were assessed using tests of sound localization, spatial speech perception in noise, and self-ratings of auditory abilities (Speech, Spatial and Qualities of Hearing questionnaire, SSQ) in a male adult with single-sided deafness due to acoustic neuroma removal. Measurements took place at 2, 6 and 12 months after surgery. Results: The pattern of CAEP responses, behavioural measurements, and self-reported perception after surgery differed from the pre-surgery baseline and changed over time after surgery. Conclusions: The participant experienced considerable listening fatigue and deficits in auditory skills after losing hearing in one ear. Different patterns of change in CAEPs and other measures over time suggest multiple physiological mechanisms for auditory plasticity after acute onset of single-sided deafness.
Article
The increasing availability of electronic health data presents a major opportunity for both discovery and practical applications that improve healthcare. However, for healthcare epidemiologists to make the best use of these data, computational techniques that can handle large, complex datasets are required. Machine learning (ML), the study of tools and methods for identifying patterns in data, can help. The appropriate application of ML to these data promises to transform patient risk stratification broadly in the field of medicine, and especially in infectious diseases. This, in turn, could lead to targeted interventions that reduce the spread of healthcare-associated pathogens. In this review, we begin with an introduction to the basics of ML. We then discuss how ML can transform healthcare epidemiology, providing examples of successful applications. Finally, we present special considerations for healthcare epidemiologists who wish to apply ML.
Article
EEG signals carry essential information about the brain and neural diseases. The main purpose of this study is to classify healthy volunteers and Multiple Sclerosis (MS) patients using nonlinear features of EEG signals recorded during cognitive tasks. EEG signals were recorded while participants performed two different attentional tasks: one based on detecting a desired change in color luminance, the other on detecting a desired change in direction of motion. The EEG signals were analyzed in two ways: analysis of the raw signals without rhythm decomposition, and analysis of the EEG sub-bands. After recording and preprocessing, the time-delay embedding method was used for state-space reconstruction, with embedding parameters determined for the original signals and their sub-bands. Nonlinear methods were then applied in the feature-extraction phase. To reduce the feature dimension, scalar feature selection was performed using the t-test and Bhattacharyya criteria. The data were then classified using linear support vector machines (SVM) and the k-nearest neighbor (KNN) method, and the best combination of criterion and classifier was determined for each task by comparing performance. For both tasks, the best results were achieved using the t-test criterion and the SVM classifier. For the direction-based and color-luminance-based tasks, maximum classification performances were 93.08% and 79.79%, respectively, reached using an optimal set of features. Our results show that nonlinear dynamic features of EEG signals are useful and effective in MS diagnosis.
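The feature-selection-plus-classification stage described above can be sketched compactly: rank features by a two-sample t-test between groups, keep the top-k, and classify with a linear SVM. `X`, `y`, and `k` are illustrative placeholders; for brevity, selection is done outside the cross-validation loop, whereas a rigorous pipeline would nest it inside each fold to avoid leakage:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def top_k_by_ttest(X, y, k=10):
    # Larger |t| means the feature separates the two groups better.
    t_vals, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(np.abs(t_vals))[::-1][:k]

selected = top_k_by_ttest(X, y, k=10)
acc = cross_val_score(SVC(kernel="linear"), X[:, selected], y, cv=5).mean()
print(f"cross-validated accuracy on selected features: {acc:.3f}")
```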
Article
Introduction: Scalp-recorded electrophysiological responses to complex, periodic auditory signals reflect phase-locked activity from neural ensembles within the auditory system. These responses, referred to as frequency-following responses (FFRs), have been widely utilized to index typical and atypical representation of speech signals in the auditory system. One of the major limitations of the FFR is the low signal-to-noise ratio at the level of single trials; for this reason, analysis relies on averaging across thousands of trials. The ability to examine the quality of single-trial FFRs would allow investigation of the trial-by-trial dynamics of the FFR, which has been impossible under the averaging approach. Methods: In a novel, data-driven approach, we used machine learning principles to decode information related to the speech signal from single-trial FFRs. FFRs were collected from participants while they listened to two vowels produced by two speakers. Scalp-recorded electrophysiological responses were projected onto a low-dimensional spectral feature space independently derived from the same two vowels produced by 40 other speakers, who were not presented to the participants. A novel supervised machine learning classifier was trained to discriminate vowel tokens on a subset of FFRs from each participant and tested on the remaining subset. Results: We demonstrate reliable decoding of speech signals at the level of single trials by decomposing the raw FFR based on independently derived, information-bearing spectral features of the speech signal. Conclusions: Taken together, the ability to extract interpretable features at the level of single trials in a data-driven manner offers uncharted possibilities in the noninvasive assessment of human auditory function.
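A rough sketch of the single-trial decoding idea follows, under simplifying assumptions: a low-dimensional spectral basis is learned (here with PCA) from vowel recordings of a separate speaker corpus, single-trial FFR spectra are projected onto that basis, and a classifier is trained on part of the trials. The study's actual feature derivation and classifier differ; `corpus_spectra`, `ffr_spectra`, and `labels` are illustrative names, and the two matrices are assumed to share the same spectral resolution:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# corpus_spectra: magnitude spectra of the two vowels from 40 other speakers
basis = PCA(n_components=10).fit(corpus_spectra)

# ffr_spectra: magnitude spectra of single-trial FFRs; labels: vowel tokens
Z = basis.transform(ffr_spectra)             # project trials onto speech-derived space
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="linear").fit(Z_tr, y_tr)
print(f"single-trial decoding accuracy: {clf.score(Z_te, y_te):.2f}")
```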
Article
Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling, or "entrainment," is thought to facilitate beat extraction and rhythm perception in temporally structured sounds such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response. We simultaneously recorded cortical and brainstem responses to isochronous and irregular sequences of the consonant-vowel syllable /da/ that contained sporadic omissions. The auditory cortex responded strongly to omissions, but we found no evidence of evoked responses to omitted stimuli from the auditory brainstem. However, auditory brainstem responses in the isochronous sound sequence were more consistent across trials than in the irregular sequence. These results indicate that the auditory brainstem faithfully encodes short-term acoustic properties of a stimulus and is sensitive to sequence regularity, but does not entrain to isochronous sequences sufficiently to generate overt omission responses, even for sequences that evoke such responses in the cortex. These findings add to our understanding of the processing of sound regularities, an important aspect of human cognitive abilities such as rhythm, music, and speech perception.
Article
Background: Untreated sensorineural hearing loss (SNHL) is associated with chronic healthcare conditions, isolation, loneliness, and reduced quality of life. Although hearing aids can minimize the negative effects of SNHL, only about one in five persons with SNHL seeks help for communication problems, and many wait 10 years or more from the time they first notice a problem before pursuing amplification. Further, little information about the benefits of amplification is available for persons with mild SNHL (MSNHL), who likely defer treatment even longer. Purpose: To conduct a systematic review weighing the evidence regarding benefits derived from the use of amplification by adults with MSNHL. Research design: Systematic review with meta-analysis. Study sample: Adult hearing aid wearers with bilateral average pure-tone thresholds ≤45 dB HL at 500, 1000, 2000, and 4000 Hz. Data collection and analysis: PubMed, the Cumulative Index to Nursing and Allied-Health Literature, the Cochrane Collaboration, and Google Scholar were searched independently by the authors during September 2013. The authors used a consensus approach to assess quality and extract data for the meta-analysis. Results: Of 106 articles recovered for full-text review, only 10 met the inclusion criteria (at least Level IV evidence, with separate pre- and post-fitting hearing aid outcomes reported for patients with MSNHL). The included studies involved mainly middle-aged to elderly patients using hearing aids of various styles and circuitry. All of the studies indicated positive benefits from amplification for patients with MSNHL. Data from five studies were suitable for a meta-analysis, which produced a small-to-medium effect size of 0.85 (95% confidence interval = 0.44-1.25) after adjusting for a small publication bias. This evidence confirmed benefits from the use of amplification in adults with MSNHL. Conclusions: Evidence exists that adults with MSNHL benefit from hearing aids. This information is important and useful to audiologists, patients, and third-party payers, even though most of the studies in this review were limited, somewhat dated, and used the analog and early digital technology available when they were conducted. Clinical recommendations may become even stronger as future studies of patients fitted with modern, high-technology hearing aids become available.
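For readers unfamiliar with how such a pooled estimate is formed, the sketch below shows generic textbook inverse-variance pooling of per-study standardized mean differences; the notation is illustrative and is not the exact model reported in that review:

```latex
% Standardized pre/post effect size for study i, and its inverse-variance pooling
d_i = \frac{\bar{x}_{\mathrm{post},i} - \bar{x}_{\mathrm{pre},i}}{s_i},
\qquad
w_i = \frac{1}{\widehat{\mathrm{Var}}(d_i)},
\qquad
\hat{d} = \frac{\sum_i w_i\, d_i}{\sum_i w_i},
\qquad
\mathrm{CI}_{95\%} = \hat{d} \pm 1.96\sqrt{\frac{1}{\sum_i w_i}}
```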
Chapter
Age-related hearing loss is a multifactorial disorder that involves a variety of etiologies, anatomical alterations of peripheral and central auditory system structures, and physiologic and behavioral consequences. This chapter provides a 20-year, broad historical perspective on major findings from animal and human research that have led to our current understanding of the nature of age-related hearing loss. Four principal domains are considered: (1) epidemiology, (2) models of presbycusis, (3) speech understanding performance, and (4) training to improve communication in real-world environments. The corresponding influences of supportive and distracting visual information, as well as of age-related cognitive decline, on auditory performance are also reviewed. The chapter culminates in a discussion of several emerging areas of research that should enable audiologists and hearing scientists to design creative technological and rehabilitative solutions to the most intractable auditory deficits reported by older people with hearing loss.
Article
Early prediction of persons at risk of Sudden Cardiac Death (SCD), with or without the onset of Ventricular Tachycardia (VT) or Ventricular Fibrillation (VF), remains a continuing challenge for clinicians. In this work, we present a novel integrated index for highly accurate prediction of SCD using electrocardiogram (ECG) signals. To achieve this, nonlinear features (Fractal Dimension (FD), Hurst's exponent (H), Detrended Fluctuation Analysis (DFA), Approximate Entropy (ApproxEnt), Sample Entropy (SampEnt), and Correlation Dimension (CD)) are first extracted from the second-level Discrete Wavelet Transform (DWT) decomposition of the ECG signal. The extracted nonlinear features are ranked by t-value, and a combination of highly ranked features is used to formulate an integrated Sudden Cardiac Death Index (SCDI). This single numerical index can accurately predict SCD up to four minutes before the episode occurs. The nonlinear features are also fed to Decision Tree (DT), k-Nearest Neighbour (KNN), and Support Vector Machine (SVM) classifiers. The combination of DWT and nonlinear analysis of ECG signals predicts SCD with accuracies of 92.11% (KNN), 98.68% (SVM), 93.42% (KNN), and 92.11% (SVM) for the first, second, third, and fourth minutes before the occurrence of SCD, respectively. The proposed SCDI should constitute a valuable tool for medical professionals in SCD prediction.
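The signal-processing chain described above can be sketched as follows: a second-level DWT of each ECG segment, a few per-segment descriptors, and a classifier. The feature functions here are simplified stand-ins rather than the authors' exact FD/DFA/entropy implementations, `ecg_segments` and `labels` are assumed inputs, and `pywt` (PyWavelets) must be installed separately:

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_level2(ecg, wavelet="db4"):
    # Keep the level-2 approximation coefficients of the ECG segment.
    coeffs = pywt.wavedec(ecg, wavelet, level=2)
    return coeffs[0]

def simple_nonlinear_features(x):
    # Cheap proxies for complexity measures (illustrative only).
    diffs = np.diff(x)
    return np.array([
        np.std(x),                          # overall variability
        np.mean(np.abs(diffs)),             # roughness / first-difference energy
        np.log(np.var(diffs) / np.var(x)),  # Hjorth-mobility-style ratio
    ])

X = np.stack([simple_nonlinear_features(dwt_level2(seg)) for seg in ecg_segments])
acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
print(f"SCD-vs-normal accuracy: {acc:.3f}")
```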
Article
This study investigated whether speech-evoked auditory brainstem responses (speech ABRs) can be automatically separated into distinct classes. Speech ABRs to five English synthetic vowels were classified using linear discriminant analysis based on features contained in the transient onset response, the sustained envelope following response (EFR), and the sustained frequency following response (FFR). The EFR contains components mainly at frequencies well below the first formant, while the FFR has more energy around the first formant. Accuracies of 83.33% were obtained for combined EFR and FFR features and 38.33% for transient response features. The EFR features performed relatively well, with a classification accuracy of 70.83%, despite the belief that vowel discrimination is primarily dependent on the formants. The FFR features obtained a lower accuracy of 59.58%, possibly because the second formant is not well represented in all the responses. Moreover, the classification accuracy based on the transient features exceeded chance level, indicating that the initial response transients contain vowel-specific information. The results of this study will be useful in a proposed application of speech ABR to objective hearing aid fitting, if the separation of the brain's responses to different vowels is found to be correlated with perceptual discrimination.
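A minimal sketch of this classification setup follows: linear discriminant analysis over band-limited spectral features standing in for the EFR and FFR components. The band edges, sampling rate, and variable names (`responses`, `vowel_labels`) are assumptions for illustration, not the study's parameters:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def band_magnitudes(response, fs, lo, hi):
    # Magnitude spectrum restricted to the [lo, hi] Hz band.
    spec = np.abs(np.fft.rfft(response))
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)]

fs = 16000  # illustrative sampling rate
features = np.stack([
    np.concatenate([
        band_magnitudes(r, fs, 20, 400),    # EFR-like band (below first formant)
        band_magnitudes(r, fs, 400, 1200),  # FFR-like band (around first formant)
    ])
    for r in responses                       # single-trial or averaged speech ABRs
])
acc = cross_val_score(LinearDiscriminantAnalysis(), features, vowel_labels, cv=5).mean()
print(f"vowel classification accuracy: {acc:.3f}")
```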