20th International Conference on Knowledge Based and Intelligent Information and Engineering
Systems, KES2016, 5-7 September 2016, York, United Kingdom
Automatic EEG processing for the early diagnosis of Traumatic
Brain Injury
Bruno Albert a, Jingjing Zhang a, Alexandre Noyvirt b, Rossitza Setchi a, Haldor Sjaaheim c, Svetla Velikova c, Frode Strisland d

a School of Engineering, Cardiff University, Cardiff CF24 3AA, UK; b Applied Automation, Bridgend, UK; c Smartbrain, Oslo, Norway; d SINTEF ICT, Oslo, Norway
Abstract
Traumatic Brain Injury (TBI) is recognized as an important cause of death and disability after an accident. The availability of a tool for the early diagnosis of brain dysfunction could greatly improve the quality of life of people affected by TBI and even prevent deaths. The contribution of this paper is a process comprising several methods for the automatic processing of electroencephalography (EEG) data, in order to provide a fast and reliable diagnosis of TBI. Integrated in a portable decision support system called EmerEEG, the TBI diagnosis is obtained using discriminant analysis based on quantitative EEG (qEEG) features extracted from data recordings after the automatic removal of artifacts. The proposed algorithm computes the TBI diagnosis on the basis of a model extracted from clinically labelled EEG records. The system evaluations have confirmed the speed and reliability of the processing algorithms as well as the system's ability to deliver an accurate diagnosis. The developed algorithms have achieved 79.1% accuracy in removing artifacts and 87.85% accuracy in TBI diagnosis. The developed system therefore enables a short response time in emergency situations and provides a tool that the emergency services can base their decisions upon, thus preventing possibly misdiagnosed injuries.
© 2016 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of KES International.
Keywords: Artifact removal; Diagnosis; Electroencephalography (EEG); Portable Medical System; Traumatic Brain Injury (TBI).
1. Introduction
Traumatic brain injury (TBI) is caused by an external force that damages the brain. This brain dysfunction can result in physical, cognitive, social, emotional, and behavioral effects on the subject1. The severity of the injury ranges from mild to severe, as do the associated impacts on the quality of life of the person with TBI2,3,4,5. TBI has
been recognized as an important cause of death in the US6 as well as in Europe7. Moreover, it imposes a significant economic burden7.
Irreversible brain damage can result from a trauma that is not diagnosed properly, or is diagnosed too late. Hence there is a need for a reliable tool that can be used by emergency services to obtain a quick diagnosis of TBI at the place of injury. However, current methods and devices that provide TBI diagnosis are limited to clinical environments. In contrast to other medical imaging technologies, Electroencephalography (EEG) has the potential to be used in a portable way. In addition, Quantitative Electroencephalography (qEEG) is a sensitive method for diagnosing brain injury after mild head injury: it has shown over 80% accuracy in discriminating between normal and traumatic brain-injured subjects2,3,4.
The EmerEEG project addresses this problem by proposing a portable decision support system based on EEG
technology for early diagnosis of TBI at the point of need. This system includes a head device for fast and simple
acquisition of EEG data during emergencies, as well as necessary devices enabling processing power, interfacing and
communication capabilities. This paper focuses on the processing part of the system, which, once integrated with the rest of the system, provides a tool for the automatic diagnosis of TBI and decision support. The idea is to enable anyone from the emergency services with minimal training to assess the severity of a brain injury.
The remainder of the paper is organized as follows. Related processing and diagnostic techniques are reviewed in Section 2. Section 3 outlines the EEG processing method and TBI diagnosis. Section 4 describes the evaluation of the system in terms of the quality of the EEG pre-processing and TBI diagnostics. Finally, Section 5 summarizes the paper and highlights future work.
2. Literature review
This section reviews methods for EEG data processing and TBI diagnosis.
The clinical criterion most widely used to classify TBI severity is the Glasgow Coma Scale (GCS), which grades
the condition of a patient on a scale from 3 to 15 based on verbal, motor, and eye reactions to stimuli8,9. However, the
GCS is a qualitative method of assessment, which has its limitations. Advanced neuroimaging techniques such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are now widely used in hospitals for the assessment of neurological damage. However, the size and non-portability of the equipment, in addition to its limitations in diagnosing mild TBI10,11,12, rule out its use in portable systems. By comparison, the EEG technique
provides a direct measurement of brain activity without the need for external radiation or injected substances.
Rather than only analyzing raw recordings through visual inspection, the extraction of quantitative EEG data, such as frequency and coherence, has shown its relevance in recent years by supplying reproducible features for the development of diagnostic tools13. The discriminant accuracy of qEEG is reported as 95.67% in the detection
of mild head injury3 and 75.8% in predicting the outcome one year after the injury14. Moreover, qEEG demonstrates
96.39% classification accuracy, 95.45% sensitivity and 97.44% specificity in discriminating between groups with
mild and severe TBI4. The EEG discriminant score is also used to measure intermediate severity in moderate TBI
patients. Significant correlations between EEG discriminant scores, emergency admission measures, and post-trauma
neuropsychological test scores have validated the discriminant function as an index of severity of injury and a classifier
of the extremes of severity4.
The procedure for computing a TBI diagnosis using EEG data normally involves pre-processing the raw recording
to reduce the impact of the low signal-to-noise ratio and to obtain a more accurate representation of the pure brain
activity. Artifacts are the most important cause of noise once errors directly due to the instrumentation have been
eliminated. Artifacts are electrical signals detected along the scalp that do not arise from the cerebrum. Typical artifacts
include electrocardiography (ECG) artifacts caused by heart beats15, ocular artifacts (EOG) caused by eye blinks or
low-frequency patterns caused by eye movements16,17, and muscle activity (EMG) caused by movements of the head,
body, jaws, or tongue. EOG and EMG activities are unavoidable in EEG recording16,17,18. Conventional clinical
approaches reduce noise by discarding epochs with artifacts through visual inspection by specialists. This manual
process is time-consuming and subject to inter-observer differences, and useful information about the brain activity
embedded in the discarded epochs might be lost.
An effective and popular alternative is the use of Independent Component Analysis (ICA), which separates the
artifacts from the EEG signals without removing epochs19,20. However, components corresponding to the artifacts
have to be carefully selected for this method to be effective. Therefore, the key to achieving automatic artifact removal
is to find a method that automatically selects artifact components from the brain activity after separation with ICA.
Spatial, spectral, temporal, and statistical features have been combined to identify artifacts15,21,22,23,24. An alternative
approach to automatic artifact removal is a novel technique, named Automatic Wavelet Independent Component
Analysis (AWICA)25. It combines wavelet transform and ICA based on the estimation of kurtosis and Renyi’s entropy.
This is done in a two-step procedure, instead of applying wavelet analysis after ICA26. One important advantage of
this method is that it suppresses artifact components while reducing the loss of residual informative data, since the
components related to relevant EEG activity are mostly preserved25.
3. EEG processing and TBI diagnosis
3.1. Offline and online operation
This section describes the proposed algorithms for automatic detection of traumatic brain injury. The algorithms
are based on data signal processing techniques and a classification approach. The aim is to alert the local operator and
the remote telemedicine personnel when TBI is detected. The flowchart in Fig. 1 shows the processes for computing
the TBI diagnosis online and for training the model offline using pre-recorded data. The two processes share the same
pre-processing and qEEG feature extraction steps. The offline process performs these steps for all recordings from the
clinical database and constructs a model using machine learning, whereas the online process applies these steps on the
continuous recording coming from the portable sub-system and extracts a predicted diagnosis from the trained model.
Fig. 1. Offline model training and online EEG processing and TBI diagnosis.
The continuous EEG acquisition and online processing starts after the montage of the electrodes on the patient’s
head is completed and the electrical contact is assured. The raw EEG recording is first pre-processed by filtering high
frequency noise and removing artifacts. Next, qEEG features within four frequency bands are calculated from this
‘clean’ recording. In particular, 16 features that have been proven discriminant in the detection of TBI3 are used in the
diagnosis step. A classification prediction is performed using discriminant analysis and the model extracted from the
previously recorded data. Detailed information about data processing and classification methods is given in the
following sections for both online and offline operations.
3.2. Continuous EEG acquisition
Clinical best practice recommends the use of at least 60 seconds of artifact-free EEG27,28. This has been confirmed by a systematic analysis using the EmerEEG system, which has shown that one-minute epochs are sufficient for obtaining reliable diagnosis results. Files containing one-minute segments of EEG data are stored according to the European Data Format (EDF) standard29 and loaded for processing. The diagnosis process starts when the first one-minute segment of EEG data has been collected. In addition to montage verifications, a fault alert mechanism in the processing algorithm detects signal faults on each channel to ensure that the diagnosis is based on reliable data. The electrodes are positioned following the 10-20 system employed in EEG best practice, and the montage used here is linked-ear.
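As a rough illustration of this step, the sketch below loads one such one-minute EDF segment for processing. It is a minimal example, not the authors' implementation: the MNE-Python library and the file name are assumptions.

```python
# Illustrative sketch: load a one-minute EDF segment recorded with the
# 10-20 montage; library choice and file name are assumptions.
import mne

raw = mne.io.read_raw_edf("segment_001.edf", preload=True)  # hypothetical file
raw.pick("eeg")                        # keep the EEG channels only
fs = raw.info["sfreq"]                 # e.g. 256 Hz, as in the clinical dataset
data = raw.get_data()                  # shape: (n_channels, n_samples)
assert data.shape[1] >= 60 * fs, "need at least 60 s of EEG in the segment"
```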
Consider, as an example, the piece of raw EEG data shown in Fig. 2. According to the annotations made by a
clinical specialist, this one minute segment contains eye blinking and electrode movement artifacts. Eye blinking
artifacts mainly appear around 1, 24, 37, 43, and 56 seconds. Electrode movements occur around 2, 12, 40, and 50
seconds. These artifacts have higher amplitude and frequency compared to the brain signals. This points to the need
for a pre-processing method. The next section describes the algorithm employed to automatically remove such artifacts.
Fig. 2. Example of a one minute segment of raw EEG data.
3.3. EEG Pre-processing
In EEG data, the signal-to-noise ratio is usually low, and the noise is higher in frequency and amplitude than the brain signals. Since the analysis of the frequency bands in the range between 0 and 30 Hz can be sufficient for the detection of TBI3,4, the recording is first filtered by a low-pass filter with a cut-off of 30 Hz.
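The paper does not specify the filter design, so the following sketch shows one plausible realization of this 30 Hz low-pass stage, assuming SciPy and a zero-phase Butterworth filter.

```python
# Sketch of the 30 Hz low-pass stage; the Butterworth design and order
# are assumptions, as the paper only states the cut-off frequency.
from scipy.signal import butter, filtfilt

def lowpass_30hz(data, fs, order=4):
    """data: (n_channels, n_samples) raw EEG; fs: sampling rate in Hz."""
    b, a = butter(order, 30.0, btype="low", fs=fs)
    return filtfilt(b, a, data, axis=1)   # zero-phase filtering per channel
```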
The next step is the removal of artifacts, including those resulting from eye blinks, eye movements, electrode movements, muscle activity, drowsiness, and head movements. The method adopted for performing the automatic removal of artifacts is the Automatic Wavelet Independent Component Analysis (AWICA)25. The flowchart of the algorithm21,25 is shown in Fig. 3.
The algorithm consists of five steps:
(1) Wavelet component (WC) extraction. Each channel of the filtered recording is divided into four frequency
bands (delta, theta, alpha and beta) using a four-level Discrete Wavelet Transform (DWT). Each band of each channel
is represented using a Wavelet Component.
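A minimal sketch of this WC extraction, assuming PyWavelets; the db4 mother wavelet and the level-to-band mapping (which presumes the signal has been band-limited and resampled so the four dyadic levels align with the delta to beta range) are assumptions not fixed by the paper.

```python
import numpy as np
import pywt

# Index sets into the wavedec output [cA4, cD4, cD3, cD2, cD1]; at an
# effective bandwidth of 32 Hz these roughly match delta (0-4 Hz),
# theta (4-8 Hz), alpha (8-16 Hz) and beta (16-32 Hz).
BANDS = {"delta": (0, 1), "theta": (2,), "alpha": (3,), "beta": (4,)}

def extract_wcs(channel):
    """channel: 1-D EEG array -> dict with one wavelet component per band."""
    coeffs = pywt.wavedec(channel, "db4", level=4)
    wcs = {}
    for band, idxs in BANDS.items():
        kept = [c if i in idxs else np.zeros_like(c) for i, c in enumerate(coeffs)]
        wcs[band] = pywt.waverec(kept, "db4")[: len(channel)]
    return wcs
```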
(2) Critical WC selection. WCs corresponding to artifacts are automatically identified through a quantitative
measure as critical WCs. The selection is based on kurtosis and Renyi's entropy, which measure the randomness and peakedness of the signals. Given a scalar random variable x, kurtosis is expressed as $k = \frac{m_4}{m_2^2} - 3$, where $m_n = E[(x - m_x)^n]$ is the nth-order central moment of the variable and $m_x$ is the mean value. The kurtosis values are first normalized to zero mean and unit variance, and those with values beyond the threshold ±1.5 (a value found by trial and error) are selected as critical WCs. The recommended order for Renyi's entropy is 2. WCs with normalized entropy beyond the threshold ±1.5 are also selected as critical WCs. Fig. 4 shows an example of critical WC selection from the recording shown in Fig. 2. A total of 76 WCs are generated from this 19-electrode recording, covering the four frequency bands (delta, theta, alpha, and beta). With a threshold of ±1.5, the red bars in Fig. 4 indicate the identified critical WCs.
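The selection rule can be sketched as follows, assuming SciPy; the histogram-based Renyi entropy estimator is one possible choice, as the paper does not specify the estimator.

```python
import numpy as np
from scipy.stats import kurtosis

def renyi_entropy2(x, bins=100):
    """Order-2 Renyi entropy from a histogram estimate of the distribution."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.log(np.sum(p ** 2))

def critical_mask(wcs, threshold=1.5):
    """wcs: (n_wcs, n_samples) array -> boolean mask of critical WCs."""
    k = np.array([kurtosis(w) for w in wcs])
    h = np.array([renyi_entropy2(w) for w in wcs])
    zk = (k - k.mean()) / k.std()   # normalize to zero mean, unit variance
    zh = (h - h.mean()) / h.std()
    return (np.abs(zk) > threshold) | (np.abs(zh) > threshold)
```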
(3) Wavelet independent component (WIC) extraction. The selected critical WCs are then passed to ICA to separate the artifactual WICs. This system adopts the FastICA 2.5 algorithm30 for Matlab. Fig. 5 shows an example of WICs obtained after applying ICA to the selected critical WCs. Artifacts with patterns similar to those annotated by the specialist in Fig. 2 can be identified independently.
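Since FastICA 2.5 is a Matlab toolbox, an equivalent sketch using scikit-learn's FastICA implementation might look like this (an approximation, not the authors' code):

```python
from sklearn.decomposition import FastICA

def extract_wics(critical_wcs):
    """critical_wcs: (n_critical, n_samples) -> (WICs as rows, fitted model)."""
    ica = FastICA(n_components=critical_wcs.shape[0], random_state=0)
    wics = ica.fit_transform(critical_wcs.T).T   # one independent source per row
    return wics, ica
```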
Fig. 3. AWICA artifact removal algorithm (adapted from21,25).
(4) Artifactual WIC selection. This step concentrates on removing one or more artifactual WICs that remain after applying ICA. The selection of these artifactual WICs is also based on kurtosis and entropy. The difference in this step is that the WIC dataset is first divided into 0.5 s non-overlapping windows (trials), and the kurtosis and entropy are then calculated over these trials. WICs with more than 20% of their trials having kurtosis or entropy beyond the threshold ±1.5 are rejected.
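A sketch of this trial-based rejection rule, reusing critical_mask from the earlier sketch to flag outlier trials (window length and rejection fraction as stated above):

```python
def reject_wics(wics, fs, win=0.5, frac=0.20):
    """Return indices of WICs whose 0.5 s trials are flagged in > 20% of cases."""
    n = int(win * fs)
    rejected = []
    for i, wic in enumerate(wics):
        trials = wic[: len(wic) // n * n].reshape(-1, n)  # non-overlapping trials
        if critical_mask(trials).mean() > frac:           # fraction of flagged trials
            rejected.append(i)
    return rejected
```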
(5) Reconstruction. The remaining WICs are then used to project back artifact-free WCs with an inverse ICA and are combined with the non-critical WCs. The result is a set of WCs cleaned of artifacts. Performing an inverse DWT then enables the reconstruction of an artifact-free EEG recording.
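The reconstruction can be sketched as follows, reusing the fitted ICA model from the earlier sketch; the inverse DWT per channel then completes the clean recording:

```python
def remove_artifactual_wics(wics, ica, rejected):
    """Zero out rejected WICs and project the rest back to wavelet components."""
    clean = wics.copy()
    clean[rejected, :] = 0.0                 # suppress artifactual sources
    # The cleaned critical WCs are then recombined with the non-critical WCs
    # and passed through the inverse DWT (pywt.waverec) channel by channel.
    return ica.inverse_transform(clean.T).T
```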
Fig. 4. Example of critical WC selection (threshold=±1.5).
Fig. 5. Example of EEG data after application of ICA.
3.4. qEEG Feature Extraction
qEEG is a numerical analysis of EEG data using signal analysis techniques such as wavelet analysis and Fourier
analysis. The commonly used features are EEG coherence, phase, power, and amplitude. The features are calculated
based on artifact-free recordings for eight frequency bands: delta (δ, 1 to 4 Hz), theta (θ, 4 to 8 Hz), alpha (α, 8 to 12 Hz), beta (β, 12 to 25 Hz), hi-beta (hi-β, 25 to 30 Hz), beta1 (β1, 12 to 15 Hz), beta2 (β2, 15 to 18 Hz), and beta3 (β3, 18 to 25 Hz). The Fast Fourier Transform (FFT) converts the signal from the time to the frequency domain:

$$X(f) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi f n / N} \qquad (1)$$

The auto-spectrum AP at frequency f is

$$AP_x(f) = c_x(f)^2 + s_x(f)^2, \qquad (2)$$

where $c_x(f)$ and $s_x(f)$ are the cosine and sine coefficients at frequency $f$ for signal $x$. The amplitude $A$ can then be obtained as the square root of the auto-spectrum: $A_x(f) = \sqrt{AP_x(f)}$. The amplitude asymmetry AA31 for two signals $x$ and $y$ is calculated as

$$AA_{xy}(f) = \frac{A_x(f) - A_y(f)}{A_x(f) + A_y(f)}, \qquad (3)$$

where $A_x(f) = \sqrt{c_x(f)^2 + s_x(f)^2}$ and $A_y(f) = \sqrt{c_y(f)^2 + s_y(f)^2}$.

Next, the coherence CO is computed for each pair-wise combination of electrodes $x$ and $y$:

$$CO_{xy}(f) = \frac{\big(c_x(f)c_y(f) + s_x(f)s_y(f)\big)^2 + \big(c_x(f)s_y(f) - c_y(f)s_x(f)\big)^2}{AP_x(f)\, AP_y(f)} \qquad (4)$$

The phase difference PH is then computed:

$$PH_{xy}(f) = \arctan\!\left(\frac{c_x(f)s_y(f) - c_y(f)s_x(f)}{c_x(f)c_y(f) + s_x(f)s_y(f)}\right) \qquad (5)$$
These calculations allow the 16 features identified by Thatcher4 as discriminant in TBI diagnosis to be employed in the algorithm. The labels used in the features follow the 10-20 coding system used in EEG research and practice, which indicates the position of the electrodes on the scalp. The letters F, T, C, P and O stand for the frontal, temporal, central, parietal, and occipital lobes, respectively. Even numbers refer to electrode positions on the right hemisphere, whereas odd numbers denote those located on the left hemisphere. The number zero represents an electrode placed on the midline. The features employed in this research are those selected by Thatcher3.
3.5. Dataset Construction and Model Training
When new field data is recorded, the TBI diagnosis is obtained by performing a classification prediction based on
a comparison of the extracted vector of qEEG features defined above with the trained model. The discriminant analysis
uses two classes: TBI and normal. The model is trained with the same qEEG features, extracted from EEG data
previously recorded in a clinical setting. The dataset is composed of EEG data recorded from 21 electrodes (2
electrodes used as references) at a sample rate of 256 Hz using a BrainMaster device32. Recordings include data with
patients’ eyes open and closed. The recordings are annotated by specialists with labels corresponding to the
International Statistical Classification of Diseases (ICD-10) system33, thereby providing a ground truth for the
classification.
The recordings used for training the model in this study have labels F07.2 and Avrg. The label F07.2 corresponds
to a post-concussional syndrome, i.e., patients diagnosed with TBI. The label Avrg stands for average healthy subjects.
In total, 288 recordings from 14 patients (8 female and 6 male) have been used, including 251 recordings labelled as
F07.2 and 37 regarded as Avrg. After pre-processing and extraction of the discriminant features, the training dataset
was constructed as an N×16 matrix, with N being the total number of clinical samples. A model with good
generalization performance was obtained by splitting this dataset into a training dataset for model training and a
validation dataset for evaluating the model.
As mentioned, the proposed method builds the model by performing a discriminant analysis. The relation between
the selected qEEG features and the classes is assumed to follow a multivariate normal distribution. The mean of each
feature is calculated for each class. The covariance is also calculated, after first subtracting the mean. Considering a
linear discriminant analysis, the model has the same covariance for each class, only the means vary. No prior
probabilities or costs are used to compute the model, as the labels define the class to which each sample belongs. The
trained model is then used to predict the classification of newly acquired data. The principle is to find the class with
the highest probability that the new sample belongs to it. In an online situation, the obtained classification is returned for each one-minute segment, and classifying multiple segments allows better precision to be achieved in the diagnosis.
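A minimal sketch of this training and prediction scheme, assuming scikit-learn's linear discriminant analysis (shared covariance, per-class means, as described above); X is the N×16 feature matrix and y holds the TBI/normal labels. The validation split size is an assumption.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def train_tbi_model(X, y):
    """Fit an LDA model and report accuracy on a held-out validation split."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = LinearDiscriminantAnalysis()
    model.fit(X_tr, y_tr)
    print("validation accuracy:", model.score(X_val, y_val))
    return model

# Online use: one prediction per one-minute segment, e.g.
# label = model.predict(segment_features.reshape(1, -1))
```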
4. Evaluation
This section evaluates the developed system in terms of its EEG processing and TBI diagnosis.
The proposed algorithm for TBI diagnosis has two main functions: pre-processing of EEG data, including an
automatic artifact removal, and the diagnosis of TBI itself, based on discriminant qEEG features and the comparison
with a model trained on previously collected and annotated data. The performance of the artifact-removal algorithm has a great impact on the final diagnosis due to the low signal-to-noise ratio of the raw recordings. Thus, in this evaluation, the performance
of the pre-processing algorithm is presented first, followed by the performance of the classification algorithm for TBI
diagnosis.
4.1. Evaluation of the pre-processing method
For the purpose of evaluating the performance of the pre-processing algorithm in removing different types of
artifacts, 20 randomly selected recordings were first visually inspected by a highly qualified specialist, who annotated
each artifact with its type. In total, 225 artifacts of 6 different types were found by the specialist (Table 1); these include
eye blinking (138 instances), eye movement (48), electrode movement (14), muscle activity (3), drowsiness (3), and
head movement (19). The dataset is representative as the total number of annotated eye blinks in these recordings is
more than half of all marked artifacts, while the number of muscle activities and drowsiness periods is very low as
these are easier to control and eliminate during the recordings.
Table 1. Artifact annotation by a specialist and percentage of removal by the pre-processing algorithm

Type of artifact        Total    Removal %
Eye blinking            138      84.8%
Eye movement            48       83.3%
Electrode movement      14       85.7%
Muscle activity         3        0%
Drowsiness              3        66.7%
Head movement           19       36.8%
Total                   225      79.1%
The artifact-removal algorithm was applied to the same 20 recordings. Table 1 shows the results: 79.1% of the annotated artifacts have been successfully removed. Most of the eye blinks (84.8%), eye movements (83.3%), and electrode movements (85.7%) have been eliminated. These three groups represent the majority of artifacts in the dataset (88.88%). The percentage of successful removal of artifacts related to drowsiness and head movement is lower, at 66.7% and 36.8%, respectively. The algorithm has failed to remove any artifacts related to muscle activity; experiments with more data are needed to improve this result.
Fig. 6 shows signals obtained after pre-processing the raw segments shown in Fig. 2. The comparison between Fig.
6A, B and C shows that the success rate in removing artifacts depends on the threshold value. Small threshold values
may result in the rejection of some elements of the brain signals (see Fig. 6A, threshold ±1), while high threshold values may result in a low success rate in artifact removal (Fig. 6C, threshold ±2). Fig. 6B is obtained with the optimal threshold
of ±1.5.
Fig. 6. Artifact-free epochs: A, threshold = ±1; B, threshold = ±1.5; C, threshold = ±2.
5. Conclusion
This paper proposes a method for an automatic early diagnosis of TBI in emergency situations. The system is based
on state-of-the-art standards and advanced technologies for processing and intelligent diagnosis. The system has been
specifically developed in response to real needs: fast and reliable assessment of possible brain injury where the
accident occurred.
The development of the automatic TBI diagnosis algorithm is based on advanced EEG signal processing and
machine learning techniques. The pre-processing step of the algorithm enables the automatic removal of artifacts and
noise, avoiding the need for a time-consuming manual inspection and removal of data segments. The diagnosis is
computed using supervised machine learning based on clinical data. The system operator is then provided with an assessment of the patient's possible traumatic brain injury.
The evaluation of the proposed algorithms has shown them to be fast and reliable, with good generalization performance of the model. The result of the automatic diagnosis, coupled with decision support within the EmerEEG system, provides the operator with an effective basis for the early application of an adapted treatment in an emergency situation.
Currently, the data stream from the head device is simulated with previously recorded data. The actual testing with
humans is beyond the scope of this project. Future work includes integration of the head device with the portable
system and clinical evaluations once medical approval is obtained.
Acknowledgement
The authors thank the European Commission for funding this research (Grant 605103, FP7-SME-2013). They also thank their partners Maytec, neuroConn, and University Hospital Göttingen from Germany, Primasil from the UK, and Tallinn University, Estonia.
References
1. Saatman K. E., Duhaime A., Bullock R., Maas A. I., Valadka A., Manley G., Classification of traumatic brain injury for targeted therapies,
J. Neurotrauma, vol. 25, no. 7, pp. 719-738, Nov. 2010.
2. Rimel R. W., Giordani B., Barth J. T., Boll T. J., Jane J. A., Disability caused by minor head injury, Neurosurgery, vol. 9, no. 3, pp. 221-228,
Sep. 1981.
3. Thatcher R. W., Walker R. A., Gerson I., Geisler F. H., EEG discriminant analyses of mild head trauma, Electroenc. Clin. Neuro., vol. 73, no.
2, pp. 94-106, Aug. 1989.
4. Thatcher R. W., North D. M., Curtin R. T., Walker R. A., Biver C. J., Gomez J. F., Salazar A. M., An EEG severity index of traumatic brain
injury, J. Neuropsych. Clin. N., vol. 13, no. 1, pp. 77-87, Feb. 2001.
5. Injury prevention and control: Traumatic brain injury - TBI data & statistics, Centers for Disease Control and Prevention, Available:
http://www.cdc.gov/traumaticbraininjury/data/index.html. [Accessed 21 July 2015].
6. Tagliaferri F., Compagnone C., Korsic M., Servadei F., Kraus J., A systematic review of brain injury epidemiology in Europe, Acta Neurochir.,
vol. 148, no. 3, pp. 255-268, Mar. 2006.
7. Gustavsson A., Svensson M., Jacobi F., Allgulander C., Alonso J., Beghi E., Dodel R., Ekman M., Faravelli C., Fratiglioni L., Gannon B., Jones
D. H., Jennum P., Jordanova A., Cost of disorders of the brain in Europe 2010, Eur. Neuropsychopharmacol., vol. 21, no. 10, pp. 718-779, Oct.
2011.
8. Teasdale G., Jennett B., Assessment of coma and impaired consciousness: a practical scale, Lancet, vol. 304, no. 7872, pp. 81-84, Jul. 1974.
9. Pal J., Brown R., Fleiszer D., The value of the Glasgow Coma Scale and Injury Severity Score: predicting outcome in multiple trauma patients
with head injury, J. Trauma., vol. 29, no. 6, pp. 746-748, Jun. 1989.
10. Bigler E. D., Maxwell W. L., Neuroimaging and neuropathology of TBI, NeuroRehabilitation, vol. 28, no. 2, pp. 63-74, 2011.
11. Bardin J. C., Fins J. J., Katz D. I., Hersh J., Heier L. A., Tabelow K., Dyke J. P., Ballon D. J., Schiff N. D., Voss H. U., Dissociations between
behavioural and functional magnetic resonance imaging-based evaluations of cognitive function after brain injury, Brain, vol. 134, no. 3, pp. 769-
782, Feb. 2011.
12. Lee B., Newberg A., Neuroimaging in traumatic brain imaging, NeuroRx, vol. 2, no. 2, pp. 372-383, Apr. 2005.
13. Thatcher R. W., Electroencephalography and mild traumatic brain injury, in Foundations of Sport-Related Brain Injuries, Springer, 2006, pp.
241-265.
14. Thatcher R. W., Cantor D. S., McAlaster R., Geisler F., Krause P., Comprehensive predictions of outcome in closed head-injured patients, Ann. N.Y. Acad. Sci., vol. 620, no. 1, pp. 82-101, Apr. 1991.
15. Winkler I., Haufe S., Tangermann M., Automatic classification of artifactual ICA-components for artifact removal in EEG signals, Behav. Brain Funct., vol. 7, no. 1, p. 30, Aug. 2011.
16. Anderer P., Roberts S., Schlogl A., Gruber G., Klosch G., Herrmann W., Rappelsberger P., Filz O., Barbanoj M. J., Dorffner G., Saletu B.,
Artifact processing in computerized analysis of sleep EEG - a review, Neuropsychobiology, vol. 40, no. 3, pp. 150-157, Sep. 1999.
17. Fatourechi M., Bashashati A., Ward R. K., Birch G. E., EMG and EOG artefacts in brain computer interface systems: A survey, Clin.
Neurophysiol., vol. 118, no. 3, pp. 480-494, Mar. 2007.
18. McFarland D. J., McCane L. M., David S. V., Wolpaw J. R., Spatial filter selection for EEG-based communication, Electroenc. Clin. Neuro., vol. 103, no. 3, pp. 386-394, Sep. 1997.
19. Hyvarinen A., Oja E., Independent component analysis: algorithms and applications, Neural networks, vol. 13, no. 4, pp. 411-430, Jun. 2000.
20. Jung T.-P., Makeig S., Humphries C., Lee T.-W., Mckeown M. J., Iragui V., Sejnowski T. J., Removing electroencephalographic artifacts by
blind source separation, Psychophysiology, vol. 37, no. 2, pp. 163-178, Mar. 2000.
21. Viola F. C., Thorne J., Edmonds B., Schneider T. E. T., Debener S., Semi-automatic identification of independent components representing
EEG artifact, Clin. Neurophysiol., vol. 120, no. 5, pp. 868-877, May 2009.
22. Mognon A., Jovicich J., Bruzzone L., Buiatti M., ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal
features, Psychophysiology, vol. 48, no. 2, pp. 229-240, Feb. 2011.
23. LeVan P., Urrestarazu E., Gotman J., A system for automatic artifact removal in ictal scalp EEG based on independent component analysis and
Bayesian classification, Clin. Neurophysiol., vol. 117, no. 4, pp. 912-927, Apr. 2006.
24. Halder S., Bensch M., Mellinger J., Bogdan M., Kubler A., Birbaumer N., Rosenstiel W., Online artifact removal for brain-computer interfaces
using support vector machines and blind source separation, Comput. Intell. Neurosci., Apr. 2007.
25. Mammone N., La Foresta F., Morabito F. C., Automatic artifact rejection from multichannel scalp EEG by wavelet ICA, IEEE Sensors Journal,
vol. 12, no. 3, pp. 533-542, Feb. 2012.
26. Castellanos N. P., Makarov V. A., Recovering EEG brain signals: artifact suppression with wavelet enhanced independent component analysis,
J. Neurosci. Meth., vol. 158, no. 2, pp. 300-312, Dec. 2006.
27. Duffy F. H., Hughes J. R., Miranda F., Bernad P., Cook P., Status of quantitative EEG (QEEG) in clinical practice, Clin. EEG Neurosci., vol.
25, no. 4, pp. 6-22, Oct. 1994.
28. Hughes J. R., John E. R., Conventional and quantitative electroencephalography in psychiatry, J. Neuropsych. Clin. N., vol. 11, no. 2, pp. 190-
208, May 1999.
29. European Data Format, [Online]. Available: http://www.edfplus.info/. [Accessed 21 July 2015].
30. Gavert H., Hurri J., Sarela J., Hyvarinen A., Fastica 2.5, [Online]. Available: http://research.ics.aalto.fi/ica/fastica/. [Accessed 12 Jan. 2015].
31. Neuroguide help manual, Applied Neuroscience, [Online]. Available: http://www.appliedneuroscience.com/Tutorials.htm. [Accessed 12 Jan. 2015].
32. Brainmaster Technologies, Inc., [Online]. Available: http://www.brainmaster.com/. [Accessed 21 July 2015].
33. International statistical classification of diseases and related health problems, 10th revision, World Health Organization, [Online]. Available: http://apps.who.int/classifications/icd10/browse/2015/en. [Accessed Jan. 2015].