Movements of the same upper limb can be
classified from low-frequency time-domain
EEG signals
Patrick Ofner1, Andreas Schwarz1, Joana Pereira1, Gernot R. Müller-Putz1
patrick.ofner@tugraz.at, gernot.mueller@tugraz.at
1Institute of Neural Engineering, Graz University of Technology, Graz, Austria
Introduction
A neuroprosthesis can restore movement functions of persons with spinal cord injury (SCI). It benefits from a brain-computer interface (BCI) with a high number of control classes. However, classical sensorimotor rhythm-based BCIs often provide fewer than three classes, and new types of BCIs need to be developed. We investigated whether low-frequency time-domain EEG signals can be used to classify hand and arm movements of the same limb. A BCI relying on the imagination of such movements could control a neuroprosthesis more naturally and provide a higher number of control classes.
Paradigm
15 healthy subjects
hand open/close, supination/pronation, and elbow extension/flexion
60 trials/class
61 EEG channels + joint angles for movement onset detection (a hypothetical detection sketch follows)
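The poster does not describe the onset-detection rule, so the following is a hypothetical sketch: smooth the joint-angle trace, differentiate to angular velocity, and mark onset at the first threshold crossing. The threshold and filter settings are assumptions, not taken from the poster.

```python
# Hypothetical movement-onset detection from a joint-angle trace.
import numpy as np
from scipy.signal import savgol_filter

def detect_onset(angle, fs, vel_thresh=5.0):
    """Return the sample index where the angular velocity (deg/s)
    first exceeds vel_thresh, or None if it never does."""
    # Smooth the angle trace, then differentiate to angular velocity.
    smoothed = savgol_filter(angle, window_length=31, polyorder=3)
    velocity = np.gradient(smoothed) * fs  # deg/s
    above = np.flatnonzero(np.abs(velocity) > vel_thresh)
    return above[0] if above.size else None
```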
Figure 1: Left: Subjects executed movements using an Armeo Spring rehabilitation device (Hocoma, Switzerland). Right: These movements were classified.
Figure 2: Sequence of a trial.
Methods
artefact removal
0.3 to 3 Hz, 4th-order zero-phase Butterworth filter
shrinkage regularized linear discriminant analysis (sLDA) classifier
1-vs-1 classification strategy
10x10-fold cross-validation (the processing chain is sketched after this list)
calculation of sLDA patterns [1] in source space, see Figure 3
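As a rough illustration, the steps above translate into a short processing chain. This is a minimal sketch, assuming epoched EEG of shape (trials x channels x samples) and a 256 Hz sampling rate (an assumption; the poster does not state one); scikit-learn's Ledoit-Wolf shrinkage LDA stands in for the sLDA classifier, and the feature grid is likewise assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.multiclass import OneVsOneClassifier

fs = 256  # assumed sampling rate
b, a = butter(4, [0.3, 3.0], btype="bandpass", fs=fs)

def extract_features(epochs):
    # Zero-phase (filtfilt) 4th-order Butterworth band-pass, 0.3 to 3 Hz.
    filtered = filtfilt(b, a, epochs, axis=-1)
    # Keep a coarse grid of time-domain amplitudes as features (assumed).
    step = fs // 4
    return filtered[:, :, ::step].reshape(len(epochs), -1)

# Shrinkage-regularized LDA in a 1-vs-1 scheme,
# evaluated with 10x10-fold cross-validation.
slda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf = OneVsOneClassifier(slda)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

# Dummy data with the poster's dimensions: 6 classes x 60 trials, 61 channels.
rng = np.random.default_rng(0)
X = extract_features(rng.standard_normal((360, 61, 2 * fs)))
y = np.repeat(np.arange(6), 60)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean cross-validated accuracy: {scores.mean():.1%}")
```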
Figure 3: Patterns are calculated from each 1-vs-1 classifier, scaled, and transformed into source space; we then take the absolute value and average over patterns. Finally, we average over time from -0.5 s to 0.5 s relative to movement onset.
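A rough sketch of the weight-to-pattern step (cf. [1]): a classifier weight vector is mapped to an activation pattern via the data covariance. The scaling and the source-space projection used on the poster are omitted here, and the function name is hypothetical.

```python
import numpy as np

def weights_to_pattern(X, w):
    """X: (n_trials, n_features) feature matrix; w: (n_features,) sLDA
    weights. Returns the pattern a = Cov(X) @ w (up to scaling)."""
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (len(X) - 1)  # empirical feature covariance
    return cov @ w

# Per the caption: take |pattern| for each of the 15 1-vs-1 classifiers,
# average over classifiers, then average over time from -0.5 s to 0.5 s
# relative to movement onset.
```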
Results
average classification accuracy: maximum of 41 % (7 % standard
deviation) at 0.125 s, see Figure 4
significance level of the average classification accuracy: 18 %
significance level of the classification accuracy for each subject: 24 %
α = 0.05, Bonferroni-corrected with respect to the length of the presented time window (a threshold computation is sketched after this list)
all subjects reached significant classification accuracies
the confusion matrix in Figure 4 indicates that movements involving
the same joints (e.g. hand open vs hand close) are less discriminable
than movements involving different joints (e.g. hand open vs elbow extension)
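Such significance levels are commonly obtained by inverting the binomial distribution of correct classifications under chance. A minimal sketch follows, using the trial counts from the paradigm; the number of tested time points is an assumption, since the poster only states that the correction covers the presented time window.

```python
from scipy.stats import binom

n_trials = 6 * 60        # trials per subject (6 classes x 60 trials)
chance = 1 / 6           # chance level for 6 classes
n_tests = 36             # assumed number of classified time points
alpha = 0.05 / n_tests   # Bonferroni correction over the time window

# Smallest accuracy that chance exceeds with probability below alpha.
threshold = binom.ppf(1 - alpha, n_trials, chance) / n_trials
print(f"single-subject significance level: {threshold:.0%}")
```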
[Figure 4, left panel: line plot of classification accuracy, acc [%] (0 to 70), over time, t [s] (-2 to 1.5).]

Figure 4, right panel, confusion matrix (relative values in %; rows: known class, columns: predicted class):

known \ predicted   Fle   Ext   Sup   Pro   Clo   Opn
Fle                 6.6   3.0   2.3   2.2   1.5   1.4
Ext                 3.0   6.7   2.3   1.9   1.2   1.1
Sup                 2.7   2.6   5.6   3.3   1.2   1.2
Pro                 2.6   2.3   3.5   6.3   1.2   1.0
Clo                 2.2   1.4   1.4   1.3   7.8   2.4
Opn                 1.6   1.4   1.6   1.3   2.9   7.9
Figure 4: Left: Subjects' classification accuracies, time-locked to movement onset; the bold line is the grand average. The solid line is the chance level; the dashed line is the significance level for the grand average. Right: Confusion matrix with relative values.
Figure 5: Classifier pattern averaged over all subjects. Blue corresponds to zero, red to the maximum value. Only voxels significant with respect to the reference period (-2 s to 1 s) are colored (α = 0.05, Wilcoxon signed-rank test, Bonferroni-corrected with respect to the number of EEG channels).
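A sketch of the voxel-wise test described in the caption, assuming per-subject pattern values for the analysis window and the reference period are already available; the array names and shapes are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

n_channels = 61
alpha = 0.05 / n_channels  # Bonferroni over the number of EEG channels

def significant_voxels(win_vals, ref_vals):
    """win_vals, ref_vals: (n_subjects, n_voxels) pattern values in the
    analysis window and the reference period. Returns a boolean mask of
    voxels whose paired difference is significant."""
    p = np.array([wilcoxon(win_vals[:, v], ref_vals[:, v]).pvalue
                  for v in range(win_vals.shape[1])])
    return p < alpha
```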
Discussion
We have shown that low-frequency time domain signals can be used
to discriminate between different movements of the same
upper limb. Movement accuracies peak after the movement onset but
reach significantly high classification accuracies before the movement on-
set. This shows that upcoming movements can be classified from the
movement planning phase. This is crucial for a BCI applicable for
end users with SCI who cannot execute all movements anymore. Fur-
thermore, movements involving different joints are better disciminable
than movements involving the same joints.
References
1. X. Liao, D. Yao, D. Wu, and C. Li, "Combining Spatial Filters for the Classification of Single-Trial EEG in a Finger Movement Task," IEEE Trans Biomed Eng, 54(5), 821-831, 2007.
Acknowledgments
This work is supported by the European ICT Programme Project H2020-643955 "MoreGrasp" and the ERC Consolidator Grant ERC-681231 "Feel Your Reach".