ADHD identification based on a linear projection and clustering

P.A. Castro-Cabrera (a) (pacastroc@unal.edu.co), D.H. Peluffo-Ordóñez (a), F. Restrepo de Mejía (b), C.G. Castellanos-Domínguez (a)

(a) Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
(b) Universidad Autónoma de Manizales, Grupo de Neuro-aprendizaje, Manizales, Colombia

VII Seminario Internacional de Procesamiento y Análisis de Imágenes Médicas SIPAIM 2011, Bucaramanga, Colombia
Abstract
Event-related potentials (ERPs) are electrical signals from the brain generated as a response to an external sensorial stimulus. These signals are widely used to diagnose neurological disorders, such as attention-deficit hyperactivity disorder (ADHD). In this paper, a novel methodology for ADHD discrimination is proposed, which consists of obtaining a new data representation by means of a re-characterization of the initial feature space. Such re-characterization is done through the distances between the data and the centroids obtained from the k-means algorithm. This methodology also includes pre-clustering and linear projection stages. In addition, this paper explores the use of morphological and spectral features as descriptive patterns of the ERP signal in order to discriminate between normal subjects and ADHD patients. Experimental results show that the morphological features, in contrast with the remaining features considered in this study, contribute most to the classification performance, reaching 86% for the original feature set.
Keywords: ADHD, ERP, clustering, linear projection
1. Introduction
Attention-deficit hyperactivity disorder (ADHD) is a prevalent disorder diagnosed on the basis of persistent and developmentally inappropriate levels of overactivity, inattention and impulsivity. It is one of the most common psychiatric disorders in childhood [1]. Currently, its diagnosis is based on the clinical criteria of the DSM-IV or ICD-10, aided by the behavior reported in questionnaires applied to parents and teachers; however, there are no biological markers or conclusive tests that diagnose this behavioral disorder with high reliability [2].
Event-related potentials (ERPs) are brain electrical signals generated as a response to an external sensorial stimulus. They have been useful in investigations of perceptual and cognitive-processing deficits, especially in children with ADHD, given that these potentials are physiologically correlated with neurocognitive functions. The features most frequently assessed on ERPs for the interpretation of cognitive processes are the areas and the peaks of the ERP components, defined by the mean and peak-to-peak voltages, respectively, which are computed in certain windows in the time domain. These parameters are determined by visual inspection of the averaged ERP waveforms [3].
ERPs comprise a number of characteristic peaks and troughs which basic research has shown to correspond to certain underlying processes. The P300 component is perhaps the most studied ERP component in investigations of selective attention and information processing, due partly to its relatively large amplitude and facile elicitation in experimental contexts [4]. Although the quantification of ERP components by areas and peaks is the standard procedure in fundamental ERP research, the conventional approach has two drawbacks:
Firstly, ERPs are time-varying signals reflecting the sum of underlying neural events during stimulus processing, operating on different time scales ranging from milliseconds to seconds. Various procedures such as ERP subtraction or statistical methods have been employed to separate functionally meaningful events that partly or completely overlap in time. However, the reliable identification of these components in the ERP waveforms remains a problem.
Secondly, analysis in the frequency domain has revealed that EEG/ERP components in different bands (delta, theta, alpha, beta, gamma) are functionally related to information processing and behavior. However, the Fourier transform (FT) of the ERP lacks information about the time localization of transient neural events. Therefore, efficient algorithms for analyzing a signal in the time-frequency plane are very important for extracting and relating distinct functional components.
These limitations, as well as those related to time-invariant methods, can be addressed by using the wavelet formalism. The wavelet transform (WT) is a time-frequency representation that has an optimal resolution in both the time and frequency domains and has been successfully applied to the study of EEG-ERP signals [5]. Although ERP feature extraction from the time-frequency domain based on the discrete WT (DWT) has become increasingly popular, this approach can prove unhelpful for pathology detection purposes, particularly for ADHD identification.
In this paper, a novel methodology is proposed that consists of a re-characterization of the initial feature space through the distances between the data and the centroids of the clusters obtained with the k-means algorithm. To this end, the original data are first selected by means of a pre-clustering stage and then linearly projected. In addition, this paper explores the use of morphological and spectral features as descriptive patterns of the ERP signal in order to discriminate between normal subjects and ADHD patients. Experimental results show that the morphological features, in contrast with the remaining features considered in this study, contribute most to the classification performance, reaching 86% for the original feature set.
2. Theoretical framework
In terms of spectral clustering, in particular graph-partitioning clustering, the affinity matrix represents the degree of relation between observations or nodes. In other words, an affinity measure denotes the degree of association or similarity between two nodes and is therefore a non-negative value.
Let $A = \{a_{ij}\}$ be the affinity matrix composed of all the relations among nodes, which satisfies the conditions $a_{ij} \geq 0$ and $a_{ij} = a_{ji}$. Then, matrix $A$ is symmetric and positive semi-definite.
Let $X = [x_1, \ldots, x_n]^{\top} \in \mathbb{R}^{n \times p}$ be the data matrix, where $x_i$ is the $p$-dimensional vector of features considered for the $i$-th subject. To guarantee scale coherence in the data representation, matrix $X$ is normalized using $x_i \leftarrow (x_i - \mu(x_i)) / \sigma(x_i)$, where $\mu(\cdot)$ and $\sigma(\cdot)$ are the mean and standard deviation operators, respectively.
A trivial way to establish the affinity measures is $A = XX^{\top}$. For some clustering methods, this kind of affinity turns out to be useful because it contains the inner products between all data points or observations.
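As a point of reference, the following minimal sketch illustrates the normalization rule and the linear affinity $A = XX^{\top}$ described above. The per-observation normalization follows the formula in the text; the data shapes and variable names are illustrative only.

```python
import numpy as np

def normalize(X):
    """Normalize each observation: x_i <- (x_i - mu(x_i)) / sigma(x_i), as in the text.
    (Per-feature standardization is a common alternative.)"""
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)
    sigma[sigma == 0] = 1.0                      # guard against constant rows
    return (X - mu) / sigma

def linear_affinity(X):
    """Trivial affinity A = X X^T: inner products between all observations."""
    return X @ X.T                               # symmetric, positive semi-definite

# Illustrative data: n = 120 subjects, p features (values are placeholders)
X = np.random.randn(120, 51)
A = linear_affinity(normalize(X))
```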
2.1. Data truncated-projection
In general, when using spectral techniques, the clustering procedure is carried out in a low-dimensional space, named the eigen-space [6]. Denote the eigen-space as $U$, which corresponds to the eigenvectors of $A$, and its corresponding subspace as $\tilde{U} \in \mathbb{R}^{n \times m}$, where $m < n$, formed by the first $m$ columns of $U$.
In this work, it is proposed to employ $\tilde{U}$ as a rotation matrix; however, given that $n$ is significantly larger than $p$, the use of the matrix $V$ of eigenvectors of $X^{\top}X$ is preferred. This can be done because the first $p$ eigenvalues of $XX^{\top}$ correspond to the eigenvalues of $X^{\top}X$ when $\|u_i\| = \|v_i\| = 1$, $i = 1, \ldots, p$. In addition, it is easy to prove that there exists a linear relation between $u_i$ and $v_i$, namely $u_i = X v_i$.
Given this, the data linear projection is:

$$Y = XV \qquad (1)$$

Then, the truncated linear projection is obtained as follows:

$$\tilde{Y} = X\tilde{V} \qquad (2)$$

where matrix $\tilde{V} \in \mathbb{R}^{p \times q}$ ($q < p$) is composed of the first $q$ columns of $V$.
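To make Eqs. (1)-(2) concrete, a minimal sketch of the truncated projection is given below, with $q$ chosen by the accumulated-variance criterion used later in the paper; the function names are illustrative and not part of the original work.

```python
import numpy as np

def choose_q(X, threshold=0.90):
    """Smallest q whose accumulated eigenvalue ratio of X^T X exceeds the threshold."""
    eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]          # sorted decreasingly
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

def truncated_projection(X, q):
    """Truncated linear projection Y~ = X V~ (Eq. (2)), where V~ holds the first q eigenvectors of X^T X."""
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)           # ascending order
    order = np.argsort(eigvals)[::-1]
    V_tilde = eigvecs[:, order[:q]]
    return X @ V_tilde

# Usage: Y = truncated_projection(X, choose_q(X))
```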
2.2. Clustering-based representation
For most spectral clustering approaches, once the new representation space is obtained, a conventional clustering algorithm is applied to group homogeneous observations [7]. In this work, a clustering-based representation is proposed. To this end, a centroid-based clustering is used to obtain a new data representation $Z = \{z_{ij}\}$, where each observation is represented by means of its distance to the centroid of each group, i.e.,

$$z_{ij} = d(\tilde{y}_i, q_j), \quad i = 1, \ldots, n; \; j = 1, \ldots, k \qquad (3)$$

where $k$ is the number of groups, $q_j$ denotes the $j$-th centroid and $d(\cdot,\cdot)$ is a distance operator. Centroids are obtained with the k-means algorithm [8], and the Euclidean norm is used as the distance measure.
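A short sketch of this clustering-based representation follows, under the assumption that scikit-learn's k-means is an acceptable stand-in for the k-means implementation used by the authors:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def cluster_based_representation(Y, k, random_state=0):
    """Build Z = {z_ij} with z_ij = d(y_i, q_j) (Eq. (3)), using k-means centroids
    and the Euclidean norm as the distance measure."""
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(Y)
    return cdist(Y, km.cluster_centers_, metric='euclidean')   # shape: n x k
```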
2.3. Heuristic search
To improve the classification performance and determine the relevant features, a feature selection stage is carried out. In this particular case, a wrapper-type heuristic search is used, namely sequential forward floating selection (SFFS) [9]. In this technique, at each stage a new variable is included using a forward sequential procedure; then, less significant variables are excluded one at a time as long as the percentage of accurate classifications increases. Once this search can no longer continue excluding variables, another forward step is taken to include a new variable and, if possible, the variable exclusion procedure is applied again. The process is iterated until no more forward steps can be taken because a high percentage of classification accuracy has been achieved.
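The floating search can be sketched as a simple wrapper loop. The code below is a simplified illustration (not the authors' implementation), which assumes cross-validated accuracy of a Gaussian naive Bayes classifier as the assessment function, in line with the Bayesian classifier mentioned in Section 3.2.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def sffs(X, y, max_features, estimator=None, cv=5):
    """Simplified sequential forward floating selection (wrapper search)."""
    estimator = estimator or GaussianNB()
    score = lambda idx: cross_val_score(estimator, X[:, idx], y, cv=cv).mean()

    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        # Forward step: include the candidate feature that maximizes the score
        new_best, f = max((score(selected + [j]), j) for j in remaining)
        selected.append(f)
        remaining.remove(f)
        best = max(best, new_best)
        # Floating (backward) steps: exclude variables while the score improves
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for j in list(selected):
                trial = [g for g in selected if g != j]
                if score(trial) > best:
                    best = score(trial)
                    selected.remove(j)
                    remaining.append(j)
                    improved = True
                    break
    return selected
```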
3. Materials and methods
3.1. Database
The experiments were carried out with 120 children belonging to educational institutions of the metropolitan area of Manizales (60 in the healthy control group and 60 in the ADHD group). The subjects, aged between 4 and 15 years, were medically diagnosed based on the clinical criteria of the DSM-IV and the MINI-KID by a multidisciplinary specialist team consisting of a general physician, a psychologist, a neuropsychologist and experts in childhood psychiatric disorders. Both groups were tested under the same lighting and noise conditions, and were defined by the following inclusion criteria: no abnormalities in the physical examination, normal visual and hearing ability, intelligence quotient greater than 80 and, if necessary, pharmacological management previously suspended. Subjects were verified to be free of evidence of any other neurological disorder.
Recordings were acquired by means of electrodes located on the head midline (Fz, Cz, Pz) according to the 10-20 international system, with a sampling frequency of 640 samples per second. Signal acquisition spanned 1 s before and after stimulus presentation. The evaluation protocol applied was the oddball paradigm in auditory and visual modalities. The auditory procedure involves the emission of an 80 dB tone lasting 50 ms, with a frequency of 1,000 Hz for the frequent stimulus and 3,000 Hz for the target stimulus, presented randomly every 1.5 s. In the visual modality of the test, the subject is asked to watch a monitor placed 1 m away that shows an image with a consistent pattern (a checkerboard of 16 squares), which is the frequent stimulus. The rare stimulus is the presentation of a target in the center of the screen with the same common pattern in the background; the subject must press a button each time the unusual stimulus appears. The experiment consists of 200 stimuli, of which 80% are non-target and the remaining 20% are target stimuli.
3.2. Experimental setup
The methodology applied in the experiments is shown graphically in the block diagram of Figure 1.
Figure 1: Proposed methodology for ERP signal analysis (ERP signals in auditory and visual modalities → preprocessing → characterization with morphological, spectral and wavelet features → data projection (PCA) → clustering → characterization by distances → heuristic search (SFFS) → classification).
The proposed methodology consists of the procedures described below. The database contains 6 recordings per patient, corresponding to acquisitions from the Fz, Cz and Pz electrodes in the auditory and visual modalities. In this work, only the results for the Pz auditory recording are reported, since this is the scalp location where the generators of the ERP components act most clearly.
The data matrix $X$ is defined as suggested in [10] and consists of three groups of features of different nature. The first group comprises 17 morphological features, which are parameters measured over the whole signal and related to its shape. This set is formed by the following characteristics: latency, amplitude, latency/amplitude ratio, absolute amplitude, absolute latency/amplitude ratio, positive area, negative area, total area, absolute total area, total absolute area, average absolute signal slope, peak-to-peak value, peak-to-peak value in a time window, peak-to-peak slope, zero crossings, zero-crossing density and slope sign alterations.
The second set of features is defined by three frequency characteristics: mode frequency, median frequency and mean frequency, which are calculated as described in [10]. Using the discrete wavelet transform, the third set of characteristics is obtained, which corresponds to the wavelet coefficients from the selected decomposition levels.
After characterization, the corresponding processing is performed on the matrix $X$ using the following procedure: centering and standardization of the data, outlier detection and verification of univariate Gaussianity. In addition to the above procedure, a data pre-clustering is performed with the methodology used in [11], thus ensuring consistency in the data and facilitating the analysis.
Next, the projection of the data is done by using the technique explained in Section 2.1. The criterion used to determine $q$ is an accumulated variance value greater than 90%.
Subsequently, the data representation is redefined through the distances between the data and the centroids of the groups formed by applying the clustering technique. To this end, the traditional k-means algorithm has been implemented, using the Euclidean distance as a dissimilarity measure.
Once the projected data are re-characterized by calculating the distances between the data and the centroids, a heuristic search algorithm is applied. In this case, a sequential floating forward selection (SFFS) is considered. This is done in order to perform a supervised reduction that may lead to the smallest number of features that allows sufficient data classification. The implemented SFFS algorithm uses a Bayesian classifier as its classification assessment function, where each probability density function is modeled as Gaussian. In addition, the method was improved with a hypothesis test (t-test) and an information-loss evaluation stage [12].
The following algorithm describes mathematically and sequentially the stages of the proposed methodology explained above.
Algorithm 1: Re-characterization of ERP signals through dissimilarity measures

Input: $X \in \mathbb{R}^{n \times p}$.
1. Apply a pre-clustering stage over the data matrix: $\hat{X} = \mathrm{preclustering}\{X\}$, where matrix $\hat{X}$ is $h \times p$ dimensional and $h < n$.
2. Estimate the covariance matrix $\Sigma_X$.
3. Compute the eigenvalues $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p)$ and eigenvectors $V = [v_1 | \cdots | v_p]$ of $\Sigma_X$, organized decreasingly, $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p$.
4. Determine the value of $q$ ($q < p$) through an accumulated variance greater than 90%.
5. Obtain the truncated linear projection: $\hat{Y} \in \mathbb{R}^{h \times q} = \hat{X}\hat{V}$.
6. Cluster the data and obtain the final centroids: $Q = [q_1^{\top} | \cdots | q_k^{\top}] = \mathrm{kmeans}(\hat{Y})$.
7. Re-characterize the projected data: $B \in \mathbb{R}^{h \times k} = \{b_{ij}\} = \{d(\hat{y}_i, q_j)\}$, $i = 1, \ldots, h$; $j = 1, \ldots, k$, where $d(\cdot,\cdot)$ is the Euclidean distance.
8. $\hat{B} \in \mathbb{R}^{h \times m} = \mathrm{SFFS}\{B\}$, where $m$ is the number of relevant variables, $m < k$.
9. Test: classifiers k-NN, LDC and SVM (70% for training and 30% for testing). $\hat{B} = \{\text{effective feature set}\}$.
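For illustration, the main chain of Algorithm 1 (steps 2-9, with the pre-clustering and SFFS stages omitted for brevity) could be sketched as follows. The number of clusters $k$ and the k-NN neighborhood size are assumptions of this sketch, since the paper does not report them.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def run_pipeline(X, y, k=4, var_threshold=0.90, seed=0):
    """Sketch of Algorithm 1: projection -> k-means centroids -> distance features -> k-NN."""
    # Steps 2-5: truncated linear projection keeping >= 90% accumulated variance
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    q = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_threshold) + 1)
    Y = X @ eigvecs[:, :q]
    # Steps 6-7: k-means centroids and distance-based re-characterization
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Y)
    B = cdist(Y, km.cluster_centers_)
    # Step 9: 70% / 30% split and k-NN classification (LDC/SVM analogous)
    Btr, Bte, ytr, yte = train_test_split(B, y, test_size=0.3, stratify=y, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Btr, ytr)
    return clf.score(Bte, yte)
```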
4. Results and discussions
The acquisition of recordings was carried out in the auditory and visual modalities through electrodes placed at positions Fz, Cz and Pz, as explained in Section 3.1. In this work, only the results for the Pz position in the auditory modality are reported, since it is the region where the ERP signal generators yield event-related potentials with better-defined components and greater amplitude.
The data matrix $X$ is made up of 16 morphological and 3 spectral features, and 32 wavelet coefficients. To calculate the wavelet features, the records were resampled to 1024 Hz and the discrete wavelet transform was applied with a biorthogonal spline wavelet with 3 vanishing moments. A decomposition of 7 levels was applied, in order to approximately match the frequency bands of the levels to the brain rhythms, such as delta (0.2 to 3.5 Hz), theta (3.5 to 7.5 Hz), alpha (7.5 to 13 Hz) and beta (13 to 28 Hz). From the 7 obtained decomposition levels, the approximation coefficients of level 7 and the detail coefficients of levels 7, 6 and 5 were selected as wavelet features. The selection of these coefficients was justified with an informativeness criterion based on accumulated Shannon entropy [13] with a threshold greater than 60%.
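A hedged sketch of this wavelet feature extraction with PyWavelets follows; 'bior3.3' is only an assumption for the biorthogonal spline wavelet with 3 vanishing moments, since the paper does not name the exact family member, and the placeholder signal is illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(signal, wavelet='bior3.3', level=7):
    """7-level DWT; keep the approximation of level 7 and the details of levels 7, 6 and 5."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA7, cD7, cD6, ..., cD1]
    cA7, cD7, cD6, cD5 = coeffs[0], coeffs[1], coeffs[2], coeffs[3]
    return np.concatenate([cA7, cD7, cD6, cD5])

def shannon_entropy(c):
    """Shannon entropy of normalized squared coefficients (informativeness criterion)."""
    p = c ** 2 / np.sum(c ** 2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example: 2 s of ERP data resampled to 1024 Hz (placeholder signal)
erp = np.random.randn(2048)
features = wavelet_features(erp)
```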
To carry out the classification tasks, three different classifiers were used: a k-NN, a linear discriminant classifier (LDC) and a support vector machine (SVM), in order to compare their performance and select the one that offers the highest classification performance. In the validation step, a partition of 70% for the training group and 30% for the test group was used. The tests produced the following results:
Figure 2 displays the performance of the k-NN classifier over repeated runs, to show the stability of the proposed methodology. It can be observed that all classification performance values are above 80% and maintain an acceptable standard deviation.
Figure 3 shows the performance obtained by the feature subsets produced by the SFFS selection algorithm, namely: 1) the performance for the first selected feature, 2) the performance for the subset formed by the first and second selected features, and so on.
Table 1 shows the accuracy, specificity and sensitivity of each group of features. It can be seen that, within the original feature set $X$, the morphological characteristics are the major contributors to the performance of the classifier. This condition is also evident in the percentages achieved by this subset of features for sensitivity and specificity.
Figure 2: Stability of the methodology with respect to iterations (x-axis: iterations, y-axis: classification performance in %).
SubsetsofFeatures
ClassificationPerformance
Figura 3: Classification performance for each feature subset
Feature          Accuracy (%)    Specificity (%)   Sensitivity (%)
Morphological    85.35 ± 3.9     85.00             85.83
Spectral         63.92 ± 8.6     73.12             51.66
Wavelet          67.85 ± 8.5     73.12             60.83

Table 1: Classification performance for each group of features
Table 2 compares the classification accuracy of the three classifiers mentioned above. It is noted that a simple non-parametric classifier such as the k-NN can achieve the best performance. Moreover, it is observed that the highest rates of specificity and sensitivity were also achieved with the k-NN.

Classifier    Accuracy (%)    Specificity (%)   Sensitivity (%)
k-NN          86.07 ± 3.5     85.00             87.50
LDC           73.92 ± 7.1     82.50             62.50
SVM           78.57 ± 4.7     81.25             75.00

Table 2: Classification performance for each classifier
Table 3 shows the classification rates achieved without the pre-clustering over $X$ applied in the preprocessing stage. In comparison with Table 2, there is a considerable drop in the classification percentages, which can be attributed to the presence of outliers or heterogeneous data.

Classifier    Accuracy (%)    Specificity (%)   Sensitivity (%)
k-NN          56.04 ± 5.6     62.91             49.16
LDC           48.54 ± 6.5     42.08             55.00
SVM           48.33 ± 4.2     24.58             72.08

Table 3: Classification performance without the pre-clustering stage
Given the low classification accuracy obtained in this test, the need for pre-clustering in the preprocessing stage is confirmed. These results also reflect the low reliability of the labels given by the medical specialists.
5. Conclusion
Because of the nature of ERP signals and the low reliability of the labeling given by specialists, the identification of ADHD represents a difficult task for both medicine and pattern recognition. The design of classification systems for discriminating between ADHD and normal signals requires a new data representation, since the signal samples alone may not be enough to obtain good class separability.
In this work, to try to overcome this problem, a data re-characterization methodology is proposed. This methodology is mainly composed of two stages: truncated linear projection and clustering-based characterization. These stages aim, respectively, at achieving a good data representation in terms of compactness and at building a distance-based representation from the obtained clusters that improves the classification performance. The two stages are complementary and coherent with each other.
In addition, a pre-clustering stage was introduced, which allows the classifiers to perform better because presumed outlier observations are discarded.
Acknowledgment
The authors would like to thank COLCIENCIAS and Universidad Nacional de Colombia for the financial support of the projects "Identificación Automática del Trastorno por Déficit de Atención y/o Hiperactividad sobre Registros de Potenciales Evocados Cognitivos" and "Sistema de Diagnóstico Asistido para la Identificación de TDAH sobre Registros de Potenciales Evocados Cognitivos", respectively.
References

[1] M. A. Idiazábal, A. Palencia-Taboada, J. Sangorrín, J. Espadaler-Gamissans, Potenciales evocados cognitivos en el trastorno por déficit de atención con hiperactividad, Rev Neurol 34 (2002) 301–305.

[2] R. A. Barkley, Attention Deficit Hyperactivity Disorder: A Handbook for Diagnosis and Treatment, 3rd Edition, Guilford Press, New York, 2005.

[3] V. Bostanov, Data sets Ib and IIb: Feature extraction from event-related brain potentials with the continuous wavelet transform and the t-value scalogram, IEEE Transactions on Biomedical Engineering 51 (6) (2004) 1057–1061.

[4] S. H. Patel, P. N. Azzam, Characterization of N200 and P300: Selected studies of the event-related potential, International Journal of Medical Sciences 2 (4) (2005) 147–154.

[5] I. Kalatzis, N. Piliouras, E. Ventouras, C. Papageorgiou, A. Rabavilas, D. Cavouras, Design and implementation of an SVM-based computer classification system for discriminating depressive patients from healthy controls using the P600 component of ERP signals, Computer Methods and Programs in Biomedicine 75 (2004) 11–22.

[6] S. X. Yu, J. Shi, Multiclass spectral clustering, in: ICCV '03: Proceedings of the Ninth IEEE International Conference on Computer Vision, IEEE Computer Society, Washington, DC, USA, 2003, p. 313.

[7] L. Zelnik-Manor, P. Perona, Self-tuning spectral clustering, in: Advances in Neural Information Processing Systems 17, MIT Press, 2004, pp. 1601–1608.

[8] P. Hansen, N. Mladenovic, J-means: A new local search heuristic for minimum sum-of-squares clustering, Pattern Recognition 34 (2) (2001) 405–413.

[9] E. Delgado-Trejos, A. Perera-Lluna, M. Vallverdú-Ferrer, P. Caminal-Magrans, G. Castellanos-Domínguez, Dimensionality reduction oriented toward the feature visualization for ischemia detection, IEEE Transactions on Information Technology in Biomedicine 13 (4) (2009).

[10] V. Abootalebi, M. H. Moradi, M. A. Khalilzadeh, A new approach for EEG feature extraction in P300-based lie detection, Computer Methods and Programs in Biomedicine 94 (2009) 48–57.

[11] S. Murillo-Rendón, G. Castellanos-Domínguez, Construcción, limpieza y depuración previa al análisis estadístico de bases de datos, in: XV Simposio de Tratamiento de Señales, Imágenes y Visión Artificial - STSIVA 2010, 2010.

[12] D. Ververidis, C. Kotropoulos, Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition, Signal Processing 88 (12) (2008) 2956–2970.

[13] R. Coifman, M. Wickerhauser, Entropy-based algorithms for best basis selection, IEEE Transactions on Information Theory 38 (2) (1992) 713–718. doi:10.1109/18.119732.
  • Article
    The t-CWT, a novel method for feature extraction from biological signals, is introduced. It is based on the continuous wavelet transform (CWT) and Student's t-statistic. Applied to event-related brain potential (ERP) data in brain- computer interface (BCI) paradigms, the method provides fully automated detection and quantification of the ERP components that best discriminate between two samples of EEG signals and are, therefore, particularly suitable for classification of single-trial ERPs. A simple and fast CWT computation algorithm is proposed for the transformation of large data sets and single trials. The method was validated in the BCI Competition 2003 , where it was a winner (provided best classification) on two data sets acquired in two different BCI paradigms, P300 speller and slow cortical potential (SCP) self-regulation. These results are presented here.