NeuroRehabilitation xx (20xx) x–xx
DOI:10.3233/NRE-151266
IOS Press
Review Article
Extracting neurophysiological signals
reflecting users’ emotional and affective
responses to BCI use: A systematic literature
review
Giulia Liberati^a, Stefano Federici^b and Emanuele Pasqualotto^a

^a Université Catholique de Louvain, Institute of Neuroscience, Louvain, Belgium
^b Università di Perugia, Department of Philosophy, Social & Human Sciences and Education, Perugia, Italy

Address for correspondence: Giulia Liberati, Université Catholique de Louvain, Institute of Neuroscience, Louvain, Belgium. E-mail: giulia.liberati@uclouvain.be
Abstract
BACKGROUND: Brain–computer interfaces (BCIs) allow persons with impaired mobility to communicate and interact with the
environment, supporting goal-directed thinking and cognitive function. Ideally, a BCI should be able to recognize a user’s internal
state and adapt to it in real-time, to improve interaction.
OBJECTIVE: Our aim was to examine studies investigating the recognition of affective states from neurophysiological signals,
evaluating how current achievements can be applied to improve BCIs.
METHODS: Following the PRISMA guidelines, we performed a literature search using PubMed and ProQuest databases. We
considered peer-reviewed research articles in English, focusing on the recognition of emotions from neurophysiological signals
in view of enhancing BCI use.
RESULTS: Of the 526 identified records, 30 articles comprising 32 studies were eligible for review. Their analysis shows that
the affective BCI field is developing, with a variety of combinations of neuroimaging techniques, selected neurophysiological
features, and classification algorithms currently being tested. Nevertheless, there is a gap between laboratory experiments and
their translation to everyday situations.
CONCLUSIONS: BCI developers should focus on testing emotion classification with patients in ecological settings and in
real-time, with more precise definitions of what they are investigating, and communicating results in a standardized way.
Keywords:
1. Introduction
Brain–computer interfaces (BCIs) are systems that measure brain signals, extract specific features from these signals, and translate these features into output signals, serving as direct communication pathways between the brain and external devices (Birbaumer, 2006a; Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). Because they do not rely on muscular activity, BCIs could represent the only means for persons with impaired mobility to communicate and interact with their environment. In this sense, they promote individuals' sense of agency and goal-directed thinking, and therefore support cognitive function that would otherwise be lost due to
the lack of actions and environmental feedback (Birbaumer, 2006b; Kübler & Birbaumer, 2008; Liberati & Birbaumer, 2012). Hence, BCIs can be regarded in all respects not only as assistive technology (AT), but also as "cognitive prosthetics" or cognitive support technologies (CST; Chu, Brown, Harniss, Kautz, & Johnson, 2014) for individuals who have lost motor function.
Most of the past research in the BCI field has focused predominantly on the improvement of technical aspects, such as enhancing information transfer rate and signal classification accuracy (Nicolas-Alonso & Gomez-Gil, 2012). In their systematic review on electroencephalography (EEG)-based BCIs, comprising 127 research articles, Pasqualotto and collaborators (2012) reported that a user-centered perspective was rarely adopted. It is well known that the lack of consideration of the user's experience can easily lead to the abandonment of assistive technology (Federici & Borsci, 2014; Federici & Scherer, 2012; Pasqualotto, Simonetta, Federici, & Olivetti Belardinelli, 2009).

The last years have seen a growing awareness of the importance of adopting a user-centered approach in the evaluation of BCIs, e.g. assessing specific aspects of the user's experience, such as usability, motivation, satisfaction, and improvement of the quality of life (Holz, Botrel, Kaufmann, & Kübler, 2015; Kübler et al., 2014; Nijboer, Birbaumer, & Kübler, 2010; Nijboer, Plass-Oude Bos, Blokland, van Wijk, & Farquhar, 2014; Pasqualotto et al., 2015; Schettini et al., 2015; Simon et al., 2014; Zickler, Halder, Kleih, Herbert, & Kübler, 2013). Although less investigated, the necessity of evaluating the user's ongoing affective and emotional states during the control of BCI systems and other technology has been recently emphasized by several researchers (Brouwer, Zander, van Erp, Korteling, & Bronkhorst, 2015; Molina, Tsoneva, & Nijholt, 2009; Zander & Jatzev, 2009). A recent focus group study (Liberati et al., 2015), which investigated the requirements of persons with amyotrophic lateral sclerosis (ALS) with respect to BCI use, highlighted how the training to control a BCI can be perceived as very stressful and time-consuming, potentially leading to loss of motivation. BCI users often feel angry or frustrated when a communication system fails to convey messages adequately, and slow communication may cause significant panic, anxiety, and loss of control when the user needs to communicate something urgently. These spontaneous changes in the mental state of the user may disrupt the electrophysiological response and deteriorate BCI performance (Holz et al., 2015; Reuderink, Poel, & Nijholt, 2011).
The relevance of emotions in the interaction with the environment is not a novel concept, and has been extensively discussed by neurologist Antonio Damasio (1994). In his book "Descartes' Error: Emotion, Reason, and the Human Brain", Damasio emphasized how emotions and feelings may not be intruders in the bastion of reason at all: they may be enmeshed in its networks, for worse and for better. From his perspective, most of the interactions with the environment take place because the organism requires their occurrence to maintain homeostasis and survival. Emotions play a fundamental role in these interactions, allowing the organism to communicate meanings to others, as well as acting as a cognitive guide to ensure survival and well-being. BCIs may represent no exception concerning the utility of emotions and affectivity in enhancing cognitive experience. Emotional responses could help accomplish useful goals, such as displaying satisfaction or disappointment towards BCI systems with the aim of improving them, and providing a more thorough experience with the environment. From the perspective of supporting individuals' cognitive function, taking affective states into account allows promoting a more thorough environmental feedback, which is fundamental for retaining one's sense of agency (Haselager, 2013) and goal-directed thinking (Birbaumer et al., 2006).

According to Picard (2000), the distance between user and machine can be reduced by taking emotional content into account, a process known as affective computing. Consistently, Azcarraga and Suarez (2013) pointed out the importance of developing affective tutoring systems able to monitor the emotions that typically emerge when facing new tasks, such as confidence, excitement, frustration, interest, engagement, boredom, or confusion, in order to facilitate learning processes. Along the same line, Molina and collaborators (2009) observed that emotions and affective states should not be seen as obstacles to BCI use. Indeed, the awareness of how these states influence brain activity patterns could lead to the development of adaptive BCIs, able to interpret the user's intention in spite of signal deviations. As emphasized by Blankertz and colleagues (2003), the aim of BCI developers should be to minimize the user's training, by imposing the major learning load on the computer rather than on the person. In other words, the computer should recognize the user's covert states and adapt to them in real-time, in order to improve its function.

These considerations are in line with a new perspective established in the last years, according to which not only voluntary self-regulated signals can be used as BCI
input, but also signals relative to the user's state (e.g. his/her emotions) can provide important information for BCI use (Nijboer et al., 2009). To this end, the terms "passive BCI" (pBCI; Brouwer et al., 2015; Cotrina et al., 2014; Roy, Bonnet, Charbonnier, & Campagne, 2013; Zander & Jatzev, 2009; Zander & Kothe, 2011; Zander, Kothe, Jatzev, & Gaertner, 2010) and "affective BCI" (aBCI; Liberati et al., 2012; Nijboer et al., 2009; van der Heiden et al., 2014; Widge, Dougherty, & Moritz, 2014) are becoming more and more popular. By relying on spontaneous brain activity, these systems aim to assess covert aspects of the user's state, without interfering with other cognitive tasks (Van Erp, Brouwer, & Zander, 2015). A further advantage of this novel approach is that the recognition of covert states could also be performed with users with cognitive impairment (e.g. dementia), who have no means to undergo extensive training and learn how to use traditional BCIs (Liberati et al., 2012a, 2012b, 2013).
1.1. Rationale
To date, there is no systematic review on the recognition of emotional and affective states from neurophysiological responses (i.e. aBCIs), and the consequent possibility to improve users' interaction with traditional BCIs and/or other AT.

By following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria (Liberati et al., 2009; Moher, Liberati, Tetzlaff, & Altman, 2009), our aim is to examine studies that have investigated the recognition of affective states from neurophysiological signals, shedding light on how current knowledge and achievements in the aBCI field can be applied to improve BCI communication and control, providing the user with a more comprehensive interaction with others and the environment. In particular, we aim to evaluate whether, at the current state of the art, the need to consider affectivity during BCI use, which has strongly emerged from recent studies with patients and possible BCI users (Holz et al., 2015; Liberati et al., 2015), can be adequately met.
2. Methods
2.1. Eligibility criteria
We considered research articles published in peer-reviewed journals in English, focusing on the recognition of affective and emotional states from neurophysiological signals acquired through non-invasive techniques, in view of enhancing BCI use. Studies simply focusing on the neural correlates of affective and emotional states, without a defined recognition or classification method, and therefore without a direct implication for BCI use, were excluded from the analysis. In the absence of a strict peer-review process, articles derived from conference proceedings were not taken into account.
2.2. Information sources and search terms
We identified articles by searching the PubMed and ProQuest electronic databases, and by scanning the reference lists of pertinent review articles, editorials, and handbooks (Kim, Kim, Oh, & Kim, 2013; Mühl, Allison, Nijholt, & Chanel, 2014a, 2014b; Mühl, Heylen, & Nijholt, 2014; Van Erp, Brouwer, & Zander, 2015). The last search was run on July 10, 2015.

We used the following search terms: "affective brain computer interface(s)"; "passive brain computer interface(s)"; emotion(s) & "brain computer interface(s)"; "emotion classification". The terms could appear at any point of the articles' full text.
2.3. Study selection and data collection process
Eligibility assessment was performed independently, in a blinded and standardized manner, by authors G.L. and E.P. A first screening was based on the article abstracts. A following selection was performed after reading the articles in full text. Disagreements between reviewers were resolved by consensus. We developed a data extraction sheet (Table 1), which was pilot-tested on ten randomly selected included studies and refined accordingly. G.L. extracted the data from the studies and E.P. checked the extracted data. Disagreements were resolved by discussion between the authors.
2.4. Data items
Information was extracted from each study on the following aspects (an illustrative way of structuring such records is sketched after the list):
1. Affective states investigated
2. Emotion elicitation method
3. Participants
   a. Number of healthy subjects
   b. Number and category of patients
4. Data collection
   a. Data acquisition technique
   b. Device portability
5. Emotion recognition technique
   a. Feature extraction
   b. Type of classifier
6. Performance of classifier
   a. Classification accuracy
   b. Online (real-time) vs. offline classification
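Purely as an illustration of how these items could be organized for analysis, the following Python sketch represents one record of the extraction sheet as a data structure; the field names and default values are illustrative and are not part of the published review protocol.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of the data extraction sheet (field names are illustrative)."""
    publication: str
    affective_states: List[str]
    elicitation_method: str
    n_healthy: int
    patient_group: Optional[str] = None      # e.g. "Parkinson's disease"
    n_patients: int = 0
    acquisition_technique: str = "EEG"        # EEG, fNIRS, or fMRI
    portable_device: bool = True
    peripheral_data: List[str] = field(default_factory=list)
    features: List[str] = field(default_factory=list)
    feature_extraction: List[str] = field(default_factory=list)
    classifiers: List[str] = field(default_factory=list)
    online_classification: bool = False
    performance: str = ""
```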
3. Results
3.1. Study selection and characteristics
The search on the PubMed and ProQuest databases provided a total of 746 citations. Nineteen additional records were identified by checking the references of relevant papers. After removing duplicates, 526 records remained. Of these, 482 were discarded, because their abstract clearly indicated that they were out of scope. The full text of the remaining 44 citations was examined in more detail. Fourteen studies did not meet the inclusion criteria and were therefore excluded from further analysis. Thirty articles comprising a total of 32 studies met the inclusion criteria and were included in the systematic review. The whole selection process can be seen in the flow diagram in Fig. 1.

All 32 studies were experiments on the recognition/classification of affective states from neurophysiological features, for the development of an aBCI.
3.2. Results
3.2.1. Affective states investigated
The majority of studies (59%) discriminated affective states according to a dimensional model (Russell & Mehrabian, 1977), either based on valence (Chanel, Kierkels, Soleymani, & Pun, 2009; Hadjidimitriou & Hadjileontiadis, 2012, 2013; Hidalgo-Muñoz, López, Pereira, Santos, & Tomé, 2013; Hosseini et al., 2011; Kashihara, 2014; Lee & Hsieh, 2014; Stikic, Johnson, Tan, & Berka, 2014; Tai & Chau, 2009), on valence and arousal (Baucom, Wedell, Wang, Blitzer, & Shinkareva, 2012; Frantzidis et al., 2010a, 2010b; Heger, Herff, Putze, Mutter, & Schultz, 2014; Hosseini & Naghibi-Sistani, 2011; Jie, Cao, & Li, 2014; Moghimi, Kushki, Power, Guerguerian, & Chau, 2012; Soleymani, Pantic, & Pun, 2012; Yoon & Chung, 2013), or on valence, arousal, and dominance/control (Koelstra & Patras, 2013).

The remaining 41% of studies discriminated different emotion categories (Azcarraga & Suarez, 2013; Chanel, Rebetez, Bétrancourt, & Pun, 2011; Jenke, Peer, & Buss, 2014; Kassam, Markey, Cherkassky, Loewenstein, & Just, 2013; Lin, Yang, & Jung, 2014; Mikhail, El-Ayat, Coan, & Allen, 2013; Murugappan, Ramachandran, & Sazali, 2010; Petrantonakis & Hadjileontiadis, 2010; Sitaram et al., 2011; Yuvaraj et al., 2014), in some cases selected in order to cover the valence-arousal space (Jenke et al., 2014; Lin et al., 2014; Yuvaraj et al., 2014). The number of emotion categories classified within a single experiment varied across studies. The highest number of classified emotions was nine (Kassam et al., 2013), followed by six (Petrantonakis & Hadjileontiadis, 2010; Yuvaraj et al., 2014), five (Murugappan et al., 2010), four (Azcarraga & Suarez, 2013; Lin et al., 2014; Mikhail et al., 2013), three (Chanel et al., 2011; Sitaram et al., 2011), and two (Sitaram et al., 2011) emotional categories. The emotions that were classified most frequently were Paul Ekman's basic and universal emotions (Ekman et al., 1987), namely anger, disgust, fear, happiness, sadness, and surprise (Jenke et al., 2014; Kassam et al., 2013; Lin et al., 2014; Mikhail et al., 2013; Murugappan et al., 2010; Sitaram et al., 2011; Soleymani et al., 2012; Yuvaraj et al., 2014). Other investigated affective states were related to the performance of a specific task with different levels of difficulty (i.e. solving math problems or playing a videogame), and comprised confidence, excitement, frustration, interest/engagement, boredom, and anxiety (Azcarraga & Suarez, 2013; Chanel et al., 2011). One study also investigated pleasure and neutrality, as well as more complex affective states such as pride, shame, envy, and lust (Kassam et al., 2013), whereas another study included calmness and curiosity (Jenke et al., 2014). Specific definitions of these affective states were generally not provided.
3.2.2. Emotion elicitation methods

In order to test affective state classification in a controlled context, several different emotion elicitation methods were used. A relatively high number of studies (37%) used visual stimuli to evoke different affective states (Baucom et al., 2012; Frantzidis et al., 2010a, 2010b; Hidalgo-Muñoz et al., 2013; Hosseini & Naghibi-Sistani, 2011; Hosseini et al., 2011; Jenke et al., 2014; Petrantonakis & Hadjileontiadis, 2010; Sitaram et al., 2011; Tai & Chau, 2009).
Table 1
Data extraction sheet summarizing the articles included in the systematic review. For each article, we extracted the following information: affective states investigated; method used to elicit the states; number of participants included (all healthy subjects, unless otherwise specified); data acquisition technique, other peripheral data, and device portability; selected features and feature extraction method; type of classifier used to discriminate the features; whether the classification was performed online or not; classification performance.

Publication | Affective states | Emotion elicitation | Participants | Technique | Other peripheral data | Portable | Selected features | Feature extraction method | Classifier | Online | Performance
Azcarraga et al. (2013) | Confidence, excitement, frustration, interest | Mathematical task | 16 | EEG* | Mouse clicks and movements | Yes | Time-domain: normalized difference between the raw value of each channel and resting state | Computation of signal value at each EEG channel | MLP, SVM | No | Classification performance: 43% SVM, 65% MLP; combination with mouse data: 51% SVM, 73% MLP
Baucom et al. (2012) | Valence and arousal | IAPS | 13 | fMRI | – | No | Voxels with the most stable responses across multiple trials | Computation of PSC relative to average activity for each voxel | Logistic regression | No | Classification accuracy significantly higher than chance level for both valence and arousal (max: 92%)
Chanel et al. (2009) | Valence | Recall of past episodes | 10 | EEG | GSR, BVP, chest cavity expansion | Yes | Frequency features; pairwise MI features; peripheral features | STFT; MI | LDA, QDA, SVM, RVM | No | Classification accuracy: 63% using time-frequency features and SVM; 70% after fusion of different feature sets; 80% after rejection of non-confident samples
Chanel et al. (2011) | Anxiety, boredom, engagement | Videogame | 14 | EEG | GSR, BVP, chest cavity expansion, skin temperature | Yes | Frequency features: theta, alpha, beta; peripheral features | FFT | LDA, QDA, SVM | No | Classification accuracy: up to 63% using LDA and fusion of different features; above chance level for all classification methods
Frantzidis et al. (2010a) | Valence and arousal | IAPS | 28 | EEG | – | Yes | Time-domain features: ERPs; time-frequency features: delta, theta, alpha | Amplitude and latency of ERPs; DWT | MD, SVM | No | Classification accuracy: 79.5% and 81.3% for MD and SVM, respectively
Frantzidis et al. (2010b) | Valence and arousal | IAPS | 28 | EEG | EDA | Yes | Time-domain features: ERPs; time-frequency features: delta, theta; ERD/ERS | Amplitude and latency of ERPs; digital filtering; ERD/ERS computation | MD | No | Classification accuracy: 77.68%
Hadjidimitriou and Hadjileontiadis (2012) | Valence | Music | 9 | EEG* | – | Yes | Frequency features: delta, theta, alpha, beta, gamma; ERD/ERS | FFT; ZAM | SVM, k-NN, QDA, MD | No | Classification accuracy: up to 86.52% using k-NN
Hadjidimitriou and Hadjileontiadis (2013) | Valence | Music | 9 | EEG* | – | Yes | Time-frequency features: ERD/ERS | Time-windowed TF analysis; ZAM | k-NN, SVM | No | Classification accuracy: up to 91% for familiar and 87% for unfamiliar musical excerpts; k-NN outperforming SVM
Heger et al. (2014) | Valence and arousal | IAPS and IADS (combined) | 8 | fNIRS | – | Yes | Time-domain features: mean; time-frequency features: Daubechies wavelets | Mean calculation; DWT | LDA | No | Classification accuracy generally above chance level for high and low valence and arousal
Hidalgo-Muñoz et al. (2013) | Valence | IAPS | 26 | EEG | – | Yes | Frequency features: spectral turbulence | FFT | SVM | No | Classification accuracy: up to 80.77%
Hosseini et al. (2011a) | Valence | Photographs | 5 | fNIRS | – | Yes | Principal components | PCA | SVM | No | Classification accuracy: 72.9% for positive and 68.3% for negative valence; up to 80% in some participants
Hosseini et al. (2011b) | Valence and arousal | IAPS | 15 | EEG | – | Yes | ApEn, WE | Entropy analysis | SVM | No | Classification accuracy: 73.25%
Jenke et al. (2014) | Anger, calmness, curiosity, happiness, sadness | IAPS | 16 | EEG | – | Yes | Features from multiple domains | Multiple feature extraction methods | QDA | No | Higher average accuracy of multivariate methods; HOC, HOS, and HHS outperforming spectral power bands
Jie et al. (2014) | Valence and arousal | Music videos | 32 | EEG | – | Yes | SampEn | SampEn analysis | SVM | No | Classification accuracy: 80.43% for valence, 79.11% for arousal
Kashihara (2014) | Valence | Neutral faces conditioned with aversive stimuli | 22 | EEG | – | Yes | Time-domain features: ERPs; time-frequency features | Amplitude and latency of ERPs; WT | SVM | No | Classification accuracy above chance level, up to 80%
Kassam et al. (2013) | Anger, disgust, envy, fear, happiness, lust, pride, sadness, shame | Self-induction | 10 | fMRI | – | No | Voxels with the most stable responses across multiple trials | Voxel selection | GNB | No | Classification accuracy: 84% within-subject, 70% across-subjects
Koelstra et al. (2013) | Valence, arousal, control | Videos | 30 | EEG | Facial expressions | Yes | Frequency features: PSD in theta, alpha, beta, and gamma; lateralization; peripheral features | FT | SVM | No | Classification accuracy significantly above chance level; EEG features outperform facial expression analysis
Lee et al. (2014) | Valence | Videos | 40 | EEG | – | Yes | Frequency features: functional connectivity | FFT | QDA | No | Classification accuracy significantly above chance level
Lin et al. (2014) | Joy, anger, sadness, pleasure | Music | 26 | EEG | Music properties | Yes | Frequency features: PSD in theta, alpha, beta, and gamma; lateralization; music features | STFT | SVM | No | Classification accuracy: 74-76%
Mikhail et al. (2013) | Joy, anger, fear, sadness | Performance of facial expressions | 36 | EEG | – | Yes | Frequency features: theta, alpha, beta, and gamma | FFT | SVM | No | Classification accuracy: 51% for joy, 53% for anger, 58% for fear, 61% for sadness
Moghimi et al. (2012) | Valence and arousal | Music | 10 | fNIRS | – | Yes | Laterality features; single-channel features | Quantification of right-left differences; computation of mean, slope, and coefficient of variation | LDA | No | Classification accuracy: 71.94% for valence, 71.93% for arousal
Murugappan et al. (2007) | Anger, disgust, fear, happiness, sadness, surprise | Videos | 6 | EEG | – | Yes | Time-frequency features: energy, entropy | DWT | FCM, FKM | No | Discrimination of happiness, disgust, and fear (clustering analysis)
Murugappan et al. (2010) | Disgust, fear, happiness, neutrality, surprise | Videos | 20 | EEG | – | Yes | Time-frequency features: delta, theta, alpha, beta, gamma | WT | k-NN, LDA | No | Classification accuracy: 83.26% using k-NN, 75.21% using LDA
Petrantonakis et al. (2010b) | Anger, disgust, fear, happiness, sadness, surprise | Facial expressions | 16 | EEG | – | Yes | Time-domain features; frequency features: alpha and beta | HOC | QDA, k-NN, SVM, MD | No | Classification accuracy using a HAF filtering procedure: 85.17%
Sitaram et al. (2011) | Disgust, happiness | IAPS/memory recall (different for training and testing) | 4 | fMRI | – | No | Highly activated voxels | EM | SVM | Yes | Classification accuracy: 80%
Sitaram et al. (2011) | Disgust, happiness | IAPS/memory recall (same for training and testing) | 12 | fMRI | – | No | Highly activated voxels | EM | SVM | Yes | Classification accuracy: 92%
Sitaram et al. (2011) | Disgust, happiness, sadness | IAPS/memory recall (same for training and testing) | 4 | fMRI | – | No | Highly activated voxels | EM | SVM | Yes | Classification accuracy: 62%
Soleymani et al. (2012) | Valence and arousal | Videos | 24 | EEG | GSR, ECG, respiration, skin temperature, eye gaze | Yes | Frequency features: PSD in theta, alpha, beta, and gamma; lateralization; peripheral features | FFT; Welch's method | SVM | No | Classification accuracy: up to 68.5% for valence and 76.4% for arousal using a modality fusion strategy
Stikic et al. (2014) | Valence | Videos/narrative story-telling | 63 | EEG | EMG | Yes | Frequency features: PSD | FFT | LDA, QDA | No | Classification accuracy: 74.3% and 94.5% using LDA, 67.3% and 93.5% using QDA, for general and individualized models respectively
Tai and Chau (2009) | Valence | IAPS | 10 | fNIRS | – | Yes | Time-domain and time-frequency features | GA | LDA, SVM | No | Classification accuracy: up to 75%
Yoon et al. (2013) | Valence and arousal | Music videos | 32 | EEG | – | Yes | Frequency features | FFT | Perceptron convergence algorithm | No | Classification accuracy: 70.9% for valence and 70.1% for arousal (two-level classes); 55.4% for valence and 55.2% for arousal (three-level classes)
Yuvaraj et al. (2014) | Anger, disgust, fear, happiness, sadness, surprise | Audio-visual stimuli, including IAPS + IADS | 20 healthy + 20 Parkinson patients | EEG* | – | Yes | Frequency features: PSD, HOS-based features | DFT | k-NN, SVM | No | Classification accuracy: 66.70% (patients) and 70.51% (healthy controls) using SVM; 64.26% (patients) and 67.84% (healthy controls) using k-NN

*EEG recorded with the wireless Emotiv EPOC headset (cf. the Data acquisition section).

List of abbreviations: ApEn: Approximate entropy; BVP: Blood volume pulse; DFT: Discrete Fourier transform; DWT: Discrete wavelet transform; ECG: Electrocardiography; EDA: Electrodermal activity; EEG: Electroencephalography; EM: Effect mapping; EMG: Electromyography; ERD: Event-related desynchronization; ERP: Event-related potential; ERS: Event-related synchronization; FCM: Fuzzy C-means; FFT: Fast Fourier transform; FKM: Fuzzy k-means; fMRI: Functional magnetic resonance imaging; fNIRS: Functional near-infrared spectroscopy; FT: Fourier transform; GA: Genetic algorithm; GNB: Gaussian naïve Bayes; GSR: Galvanic skin response; HHS: Hilbert-Huang spectrum; HOC: Higher order crossings; HOS: Higher-order spectra; IADS: International Affective Digitized Sounds; IAPS: International Affective Picture System; k-NN: K-nearest neighbor; LDA: Linear discriminant analysis; MD: Mahalanobis distance; MI: Mutual information; MLP: Multilayer perceptron; PCA: Principal component analysis; PSC: Percent signal change; PSD: Power spectral density; QDA: Quadratic discriminant analysis; RVM: Relevance vector machine; SampEn: Sample entropy; STFT: Short-time Fourier transform; SVM: Support vector machine; WE: Wavelet entropy; WT: Wavelet transform; ZAM: Zhao-Atlas-Marks.
Fig. 1. Flow of information through the different phases of the systematic review. Records were first identified through database searching and
other sources (reference lists of pertinent articles). After removing duplicates, records were screened on the basis of their abstracts. The full-texts
of the included articles were assessed for eligibility. Thirty articles, including a total of 32 studies, were finally included in the review.
Pictures drawn from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 1999) were often used as emotional visual stimuli (Baucom et al., 2012; Frantzidis et al., 2010a, 2010b; Hidalgo-Muñoz et al., 2013; Hosseini & Naghibi-Sistani, 2011; Jenke et al., 2014; Sitaram et al., 2011; Tai & Chau, 2009) because, being standardized according to valence and arousal normative ratings, they are considered particularly suitable for investigating the valence and arousal dimensions of emotions. Other visual stimuli used to elicit emotions were photographs of various kinds (Hosseini et al., 2011) and facial expressions (Petrantonakis & Hadjileontiadis, 2010). The latter were used because of findings suggesting that humans have a predisposition to react emotionally to facial stimuli (Buck, 1980), and because they are considered to have limited variability across cultures (Ekman, 1999).

Few studies (12%) relied on the auditory channel, specifically using music, which is considered particularly effective for emotion elicitation (Hadjidimitriou & Hadjileontiadis, 2012, 2013; Lin et al., 2014; Moghimi et al., 2012). Thirty-four percent of studies used multimodal stimuli, combining the visual and auditory modalities (Heger et al., 2014; Jie et al., 2014; Kashihara, 2014; Koelstra & Patras, 2013; Lee & Hsieh, 2014; Murugappan et al., 2007, 2010; Soleymani et al., 2012; Stikic et al., 2014; Yoon & Chung, 2013; Yuvaraj et al., 2014), e.g. using videos (Jie et al., 2014; Koelstra & Patras, 2013; Lee & Hsieh, 2014; Murugappan et al., 2007, 2010; Soleymani et al., 2012; Stikic et al., 2014; Yoon & Chung, 2013) or IAPS stimuli combined with stimuli drawn from the International Affective Digitized Sounds (IADS; Heger et al., 2014; Yuvaraj et al., 2014), which are also standardized according to valence and arousal normative ratings (Bradley & Lang, 1999). In a more limited number of studies, affective states were self-induced by participants, either following instructions (Kassam et al., 2013), performing facial expressions (Mikhail et al., 2013), or recalling past episodes (Chanel et al., 2009; Sitaram et al., 2011).

Only two studies investigated the classification of affective states during the performance of cognitive and/or motor tasks in realistic settings (Azcarraga & Suarez, 2013; Chanel et al., 2011). Azcarraga and collaborators (2013) aimed to recognize emotions in students solving algebra equations. Chanel et al. (2011) investigated emotion classification in subjects playing a videogame (Tetris) with different levels of difficulty. Importantly, none of the studies investigated the recognition of affective states during the use of a standard BCI for communication or environmental control, or during the use of other AT.
3.2.3. Participants
The studies included in the review involved a total of 598 participants (note that some subjects were shared across studies, i.e. between Yoon et al., 2013 and Jie et al., 2014, and within the experiments by Sitaram et al., 2011). Only 20 participants belonged to a patient population (Parkinson disease; Yuvaraj et al., 2014). None of the participants presented cognitive impairment. Because the age and gender of the participants were often not reported in the articles, we will not include this information in our analysis. In the majority of cases, studies were conducted with university students who volunteered to participate.
3.2.4. Data acquisition
In most studies (72%), neurophysiological data were acquired using EEG (Azcarraga et al., 2013; Chanel et al., 2009, 2011; Frantzidis et al., 2010a, 2010b; Jenke et al., 2014; Hadjidimitriou et al., 2012, 2013; Hidalgo-Muñoz et al., 2013; Hosseini et al., 2011; Jie et al., 2014; Kashihara et al., 2014; Koelstra et al., 2012; Lee et al., 2014; Lin et al., 2014; Mikhail et al., 2013; Murugappan et al., 2007, 2010; Petrantonakis & Hadjileontiadis, 2010; Soleymani et al., 2012; Stikic et al., 2014; Yoon et al., 2013; Yuvaraj et al., 2014). In several cases, the Emotiv EPOC headset, a wireless EEG system that can be set up in a shorter amount of time compared to other systems, was used (Azcarraga & Suarez, 2013; Hadjidimitriou & Hadjileontiadis, 2012, 2013; Yuvaraj et al., 2014). EEG was often used in combination with the acquisition of peripheral data, such as galvanic skin response (GSR; Chanel et al., 2009, 2011; Soleymani et al., 2012), skin temperature (Chanel et al., 2011; Soleymani et al., 2012), electrodermal activity (EDA; Frantzidis et al., 2010a), blood volume pulse (BVP; Chanel et al., 2009, 2011), chest cavity expansion (Chanel et al., 2009, 2011), electromyography (EMG; Stikic et al., 2014), electrocardiography (ECG; Soleymani et al., 2012), video-recorded facial expressions (Koelstra & Patras, 2013), mouse clicks and movements (Azcarraga & Suarez, 2013), and stimulus characteristics (Lin et al., 2014).

In a more limited number of studies (12%), data acquisition was performed using functional near-infrared spectroscopy (fNIRS; Heger et al., 2014; Hosseini et al., 2011; Moghimi et al., 2012; Tai & Chau, 2009). Both EEG and fNIRS are techniques that rely on relatively portable devices. Few studies (16%) used functional magnetic resonance imaging (fMRI) data acquisition (Baucom et al., 2012; Kassam et al., 2013; Sitaram et al., 2011).
3.2.5. Emotion recognition techniques
Recognizing mental states from neurophysiological signals generally requires three steps, namely data pre-processing, feature extraction, and classification (Kim et al., 2013). As pre-processing methods are relatively consistent across research groups, in this review we focused on the differences in feature extraction, based on neurophysiological and neuropsychological knowledge, and in emotion classification methods, relying on machine learning and statistical signal processing.
In the analyzed studies, the classification of different affective states was based on the extraction of features belonging either to the time domain (Azcarraga & Suarez, 2013; Frantzidis et al., 2010a, 2010b; Heger et al., 2014; Kashihara, 2014; Petrantonakis & Hadjileontiadis, 2010; Tai & Chau, 2009), to the frequency domain (Chanel et al., 2009, 2011; Hadjidimitriou & Hadjileontiadis, 2012; Hidalgo-Muñoz et al., 2013; Koelstra & Patras, 2013; Lee & Hsieh, 2014; Lin et al., 2014; Mikhail et al., 2013; Soleymani et al., 2012; Stikic et al., 2014; Yoon & Chung, 2013; Yuvaraj et al., 2014), or to the time-frequency domain (Hadjidimitriou & Hadjileontiadis, 2013; Murugappan et al., 2007, 2010). The rationale for mostly extracting features from the frequency domain is that spectral power in the various frequency bands is often associated with emotional phases (Jenke et al., 2014; Kim et al., 2013), and frontal asymmetry in the alpha band power appears to differentiate valence levels (Cacioppo, 2004). In particular, the left frontal lobe appears to be involved in the experience of positive valence, and the right frontal lobe in the experience of negative valence (Crabbe, Smith, & Dishman, 2007; Davidson, 1992; Fox, 1991; Müller, Keil, Gruber, & Elbert, 1999; Schmidt & Trainor, 2001).
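As a purely illustrative sketch of this kind of feature extraction (not drawn from any specific study in the review), the following Python code computes log band power with Welch's method and a frontal alpha-asymmetry index for a single EEG epoch; the sampling rate, band limits, and electrode indices (e.g. an F3/F4 pair) are assumptions made for the example.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epoch, fs=FS):
    """Log band power per channel; epoch has shape (n_channels, n_samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=2 * fs, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        # integrate the PSD over the band, then log-transform for stability
        feats.append(np.log(np.trapz(psd[:, mask], freqs[mask], axis=-1)))
    return np.concatenate(feats)

def frontal_alpha_asymmetry(epoch, left_ch=0, right_ch=1, fs=FS):
    """ln(alpha power, right) - ln(alpha power, left); channel indices are assumed."""
    freqs, psd = welch(epoch[[left_ch, right_ch]], fs=fs, nperseg=2 * fs, axis=-1)
    mask = (freqs >= 8) & (freqs < 13)
    alpha = np.trapz(psd[:, mask], freqs[mask], axis=-1)
    return np.log(alpha[1]) - np.log(alpha[0])
```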
Several different classification methods were used for emotion recognition, including linear discriminant analysis (LDA; Chanel et al., 2009, 2011; Heger et al., 2014; Moghimi, Kushki, Guerguerian, & Chau, 2012; Murugappan et al., 2010; Stikic et al., 2014; Tai & Chau, 2009), quadratic discriminant analysis (QDA; Chanel et al., 2009, 2011; Hadjidimitriou & Hadjileontiadis, 2012; Lee & Hsieh, 2014; Petrantonakis & Hadjileontiadis, 2010; Stikic et al., 2014), support vector machines (SVM; Hadjidimitriou & Hadjileontiadis, 2012; Hidalgo-Muñoz et al., 2013; Hosseini & Naghibi-Sistani, 2011; Hosseini et al., 2011; Jie et al., 2014; Kashihara, 2014; Koelstra & Patras, 2013; Lin et al., 2014; Mikhail et al., 2013; Sitaram et al., 2011; Soleymani et al., 2012), k-nearest neighbor (k-NN; Hadjidimitriou & Hadjileontiadis, 2012, 2013; Murugappan et al., 2010; Petrantonakis & Hadjileontiadis, 2010; Yuvaraj et al., 2014), and Mahalanobis distance (MD; Frantzidis et al., 2010a, 2010b; Hadjidimitriou & Hadjileontiadis, 2012; Petrantonakis & Hadjileontiadis, 2010). Whereas LDA and QDA perform dimensionality reduction from a high-dimensional feature space to a low-dimensional space in order to maximize the Fisher discriminant ratio (McLachlan, 2004), SVMs determine a decision boundary in kernel space instead of the original feature space (Vapnik, Golowich, & Smola, 1997). The k-NN algorithm assigns a new feature vector to the class most common among its nearest vectors in the training set (Altman, 1992). MD, widely used in clustering analysis, is a measure of the distance between a point and a distribution (Mahalanobis, 1936). For a more in-depth overview of the different classification methods, the reader can refer to reviews on computational methods for mental state classification (Kim et al., 2013; Lemm, Blankertz, Dickhaus, & Müller, 2011; Lotte, Congedo, Lécuyer, Lamarche, & Arnaldi, 2007).
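As an illustrative sketch only (none of the reviewed studies necessarily used this library), the following Python code shows how several of the above classifiers could be compared on a common feature matrix with scikit-learn; the feature matrix X and labels y are random placeholders standing in for extracted neurophysiological features and affective labels.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data: 100 trials x 40 features, binary valence labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 40)), rng.integers(0, 2, size=100)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation per classifier
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```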
The studies included in this review often compare several classification procedures, using different feature extraction methods and applying different classification algorithms within the same experiment. The different procedures are summarized in Table 1.
3.2.6. Classification performance
Although the scientific community has suggested that a classification accuracy of 70% can be considered sufficient for BCI communication and device control (Kübler, Neumann, Wilhelm, Hinterberger, & Birbaumer, 2004), no standard threshold has been established in the context of aBCIs. Moreover, it is very difficult to compare results from different studies, for several reasons. First, results are not reported in a standardized way across research articles, which sometimes indicate an average classification performance, while at other times they only report the highest performance. Second, the classified affective states vary in number, type, and complexity across studies. Third, it is plausible that, at the current state of the art, numerous feature selection and classification methods are tested within the same study, but only the ones leading to the best performance are chosen for presentation in the paper, eventually leading to a publication bias. Fourth, the studies rely on different and often very small datasets.
In all the selected studies, the reported classification accuracies were above chance level, suggesting that the performance could be considered acceptable to a certain extent. The real discriminative factor, however, is when the recognition of the affective state is performed. Only in very few cases (9%), and within the same lab, was the recognition of affective states performed in real time (Sitaram et al., 2011). In the rest of the studies, emotion classification was performed offline, after the end of the experiment.
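The reviewed articles do not describe a common procedure for establishing chance level; as an illustration only, one simple check (not taken from the reviewed studies) is to compare the observed accuracy against the upper quantile of the binomial distribution expected under random guessing, which depends on the number of trials and classes.

```python
from scipy.stats import binom

def above_chance(accuracy, n_trials, n_classes, alpha=0.05):
    """True if the observed accuracy exceeds the (1 - alpha) binomial
    quantile expected under random guessing with uniform class priors."""
    threshold = binom.ppf(1 - alpha, n_trials, 1.0 / n_classes) / n_trials
    return accuracy > threshold

# With 40 trials and 2 classes, accuracies below roughly 63% are still
# compatible with guessing, even though nominal chance level is 50%.
print(above_chance(0.70, n_trials=40, n_classes=2))
```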
4. Discussion
In the present review, we collected relevant information from research articles investigating the recognition of affective states from neurophysiological signals, to evaluate the possibility of applying aBCIs in everyday contexts, particularly with individuals using traditional BCIs and other AT for cognitive support.

Overall, two different lines of research appear in the aBCI field, one following a dimensional model of emotions (Russell & Mehrabian, 1977), the other based on categories often identified as "basic" or "universal" (Ekman, 1999). The dimensional approach is based on a limited number of simple dimensions (maximum three, when dominance is included), which do
not require an in-depth definition and are not culture-dependent. Indeed, in the context of traditional BCIs for communication and environmental control, the possibility to obtain information on the user's valence (e.g. how pleasant or unpleasant the interaction with the BCI or other technology is) and arousal (e.g. whether he/she is nervous) could allow contingent modification and improvement of the user-machine interaction. Nevertheless, the use of a limited number of dimensions might entail an over-simplification of affective states. First of all, there is a high risk of associating very different emotions that can share valence and arousal levels (e.g. anger and fear). Second, this approach does not take into account that the same affective state may vary notably in both valence and arousal. In spite of these limitations, the possibility of using this approach to obtain feedback on the user's experience in the interaction with a BCI, even if only to evaluate his/her acceptance or appreciation of the technology, would be a remarkable and important step towards the development of user-centered adaptive BCIs.
The second line of research aims at identifying more definite affective states. Ideally, the possibility to recognize a number of different emotions would remarkably improve the interaction with a BCI, e.g. by taking into account what makes a user angry, sad, or happy. The downside of this approach is that the more categories are included in the classification, the more difficult their discrimination becomes.
Only a limited number of studies investigated more complex emotions or task-related affective states (Azcarraga & Suarez, 2013; Chanel et al., 2011). The fact that more complex emotions such as pride or shame are less investigated is not surprising, as this represents a more advanced step in affective computing, and because these affective states are strongly culture-dependent and not defined unequivocally. As observed by Brouwer and colleagues (2015), researchers investigating specific emotion categories should discuss how these states have been defined in previous studies, as well as which definition they are adhering to. Unfortunately, precise definitions of the emotions under investigation are often lacking in the considered studies.
The fact that task-related affective states, such as anxiety, frustration, boredom, or interest, are hardly investigated draws attention to the gap between the need to develop BCIs following a user-centered approach and what is currently being evaluated in laboratory settings. Finding a way to recognize task-related affective states would pave the way for considerably more usable and effective BCIs and ATs, but research in this direction appears to be still at an early stage.
In the analyzed studies, emotions were often elicited using standardized stimuli, mostly from the visual or audiovisual domain. In view of improving the BCI experience, this choice can be considered useful, as these modalities have an important role in the interaction with computer technology. Stimuli from the IAPS or from the IADS have the advantage of being well studied, and are considered independent of culture, gender, and age, so they are expected to evoke similar emotions across subjects (Bradley & Lang, 1999; Lang et al., 1999). One disadvantage of these stimuli, however, is that they can recall different memories in different individuals, meaning that what is being recognized in classification studies is not always univocal. Moreover, brain signals could be modulated by stimulation properties irrelevant to emotion (Heger et al., 2014; Kim et al., 2013). As observed by Heger and collaborators (2014), when analyzing affective reactions to emotional stimulation based on neurophysiological activity, low-level sensory processing may have an influence on the measured signals, even with highly controlled experimental setups. Mühl and colleagues (2011) also pointed out that different induction modalities result in modality-specific EEG responses. Some researchers have also argued that these paradigms are too distant from an ecological setting (Brouwer et al., 2015). For instance, there is evidence suggesting that visual processing may be quicker when the subject actively samples the environment through eye movements, compared to when an image is imposed at static eyes (Kamienkowski, Ison, Quiroga, & Sigman, 2012).
To overcome the limitations of using artificial stimuli, some researchers prefer asking subjects to self-induce given emotions, for instance by voluntarily recalling past episodes associated with these states (Chanel et al., 2009; Kassam et al., 2013; Sitaram et al., 2011). These studies still have the limitation of being circumscribed to lab settings, in which emotions are detached from current everyday situations and are not investigated in the context of a real task or activity. Only a very limited number of studies required participants to actively perform cognitive and/or motor tasks with different levels of difficulty to elicit emotions (Azcarraga & Suarez, 2013; Chanel et al., 2011). Differently from lab experiments, in which participants are normally instructed to sit still and perform a single task, users in real-world situations will usually be in motion and possibly multitasking (Van Erp, Lotte, & Tangermann, 2012). More studies in this direction would be
helpful for better recognizing and understanding affective states emerging when interacting with traditional BCIs and other technology.
Except for one study, to date all studies were performed with healthy participants with no motor or cognitive impairment. The replication of these studies in a patient population, as well as testing the emotion recognition of patients during BCI use, would therefore be advisable.
Predictably, most studies were performed using portable techniques, such as EEG and fNIRS. EEG appears to be the primary option for the development of affective recognition systems, thanks to its low cost and simplicity of use, and because it reacts with very low latency to changes in mental states (Balconi & Lucchiari, 2006; Putze & Schultz, 2014). Moreover, new-generation EEG systems such as the Emotiv EPOC headset do not require gel application and therefore need less time to be set up, while approaching the performance of conventional wet systems (Brouwer et al., 2015; Zander et al., 2011). In several cases, the acquisition of EEG data was combined with the acquisition of peripheral data such as respiration, ECG, EMG, and eye movements, paving the way to what has been defined as a "hybrid BCI" (Pfurtscheller et al., 2010). The main purpose of combining different physiological data acquisition systems is to improve classification accuracy and overcome the disadvantages of the individual techniques. Whereas this strategy potentially leads to more robust results, it should be considered that the addition of redundant or unreliable modalities may be detrimental to classification accuracy (Putze & Schultz, 2014). Moreover, in clinical settings, BCIs are often used with patients who gradually undergo a loss of mobility, so it should be considered that relying on muscular and ocular movements progressively becomes unfeasible.
More recently, fNIRS was introduced as an alternative input modality for BCIs (Strait & Scheutz, 2014). Albeit offering lower temporal resolution than EEG, it provides higher spatial resolution (Bießmann, Plis, Meinecke, Eichele, & Müller, 2011). Moreover, in comparison to EEG, fNIRS requires a shorter setup time, and is not susceptible to electrical artifacts from environmental and physiological sources (Heger et al., 2014). Compared to fMRI, fNIRS is less expensive, portable, and more bearable for subjects (Cutini, Moro Basso, & Bisconti, 2012; Heger et al., 2014; Strait & Scheutz, 2014).
A few studies used fMRI for emotion recognition. It may be surprising that a technique such as fMRI could be used in BCI contexts, as its applicability in everyday life is not plausible due to its high cost and immobility. Nevertheless, fMRI has the advantage of measuring responses from cortical and subcortical structures associated with emotional processing and located deep in the brain (Kim et al., 2013; Morgane, Galler, & Mokler, 2005; Vytal & Hamann, 2010). Hence, from a proof-of-concept perspective, and to increase the knowledge on emotion differentiation in the brain, fMRI studies should still be considered useful.
The different studies varied greatly with respect to the kinds of selected neurophysiological features and tested classifiers. On one side, this variety indicates that there are numerous paths in the aBCI domain that can be further explored, possibly leading to new solutions and improvements. On the other side, the complexity of this young research branch and the lack of unanimity across working groups clearly indicate that the aBCI field is still at its outset.
As observed by Brouwer and colleagues (2015), several confounding factors may lead to overly optimistic results in the classification of mental states. For instance, classification accuracies can be inflated if the data used to train the model are not independent of the data used to test the trained model. To obtain an unbiased estimate of the generalization performance of the model, an additional data set independent from both the training and the validation data is required, as in a nested cross-validation scheme (Lemm et al., 2011). The occurrence of this confounding factor in a specific study is generally difficult to judge from the information reported in the articles. The most reliable way to recognize a satisfactory BCI would be to evaluate whether it works in real time, but almost all of the works analyzed in this review only presented offline results.
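As a purely illustrative sketch of the nested cross-validation scheme mentioned above (the classifier, hyperparameter grid, and data are placeholders), the inner loop selects hyperparameters on the training folds only, while the outer loop estimates generalization on data that never entered model selection:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 30)), rng.integers(0, 2, size=120)  # placeholder data

# Inner loop: hyperparameter selection restricted to the training folds.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner_cv)

# Outer loop: accuracy estimated on folds never used for model selection.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"Generalization estimate: {scores.mean():.2f} +/- {scores.std():.2f}")
```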
The most relevant outcome of the present review is that, whereas all articles reported acceptable classification performances, the emotion classification was generally either performed offline, or the adopted methodology was not portable. These aspects pose serious limitations to the possibility of transferring BCI-based affective state recognition to real-life situations at the current stage.
It is important to underline that the offline recognition of affective states from neurophysiological signals can still be very valuable, providing researchers, software designers, and technicians with additional information that can be used for improving the technology, e.g. by measuring levels of stress and satisfaction (Brouwer et al., 2015; Mühl et al., 2014a). Compared to self-reports and behavioral measurements,
neurophysiological measurements allow detecting affective states by directly assessing the structures from which they emerge (Kim et al., 2013). Moreover, differently from other channels such as facial expressions or speech, neurophysiological measurements cannot be feigned (Chanel, Kronegg, Grandjean, & Pun, 2006).
On the other hand, the gap between laboratory experiments and everyday situations must be filled. In order to evaluate whether the different aBCI studies, at the present stage, can lead to a clear application in the field of cognitive support technologies in general, and more specifically in combination with traditional BCIs, we considered a combination of three factors, namely (i) classification performance, (ii) device portability, and (iii) the possibility of recognizing emotional states in real time. Importantly, these criteria do not represent an evaluation of the quality of each study per se. Nevertheless, they are a basic requirement for the use of an aBCI in everyday life. As presented in the Results section, all studies reported above-chance classification accuracy, so we focused on the second and third factors. We observed that none of the studies using EEG or fNIRS signals, despite relying on portable techniques that could potentially be used in home environments, performed real-time emotion classification. In fact, the only studies that reported online classification were performed with fMRI (Sitaram et al., 2011), which is not portable. Overall, although the results obtained in all studies seem encouraging, none of the applied methods is currently ready to be used in everyday life, either with healthy participants or with patient populations.
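For illustration only, the sketch below shows the kind of processing loop that real-time (online) affective-state classification with a portable EEG system would require: short windows of data are acquired continuously, features are extracted, and a classifier trained offline returns a state estimate that the BCI could use to adapt to the user. The acquisition function, the alpha band-power feature, and the classifier are hypothetical placeholders under assumed parameters, not taken from the reviewed studies.

import numpy as np

FS = 256                 # assumed sampling rate (Hz)
WINDOW_SAMPLES = FS      # 1-second analysis window
N_CHANNELS = 8

def acquire_window():
    """Placeholder for a call to an EEG amplifier's streaming API."""
    return np.random.randn(N_CHANNELS, WINDOW_SAMPLES)

def band_power_features(window, fs=FS):
    """Log alpha-band (8-13 Hz) power per channel, one of the feature types
    commonly reported in the reviewed EEG studies."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    alpha = (freqs >= 8) & (freqs <= 13)
    return np.log(psd[:, alpha].mean(axis=1))

class PretrainedClassifier:
    """Stand-in for a model fitted offline on labelled trials (e.g. an SVM)."""
    def predict(self, features):
        return "high arousal" if features.mean() > 0 else "low arousal"

classifier = PretrainedClassifier()
for _ in range(5):                       # in practice: loop for the whole session
    window = acquire_window()
    state = classifier.predict(band_power_features(window))
    print("Estimated affective state:", state)   # fed back to adapt the BCI

The point of the sketch is structural: unlike the offline pipelines reported in most of the reviewed studies, every step here must run within the duration of one analysis window.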
4.1. Limitations of the review
In the present systematic review, we did not include conference proceedings. We believe that at least some of these studies are of high quality and undoubtedly promising, but because they did not undergo rigorous peer review, and because important methodological details are often missing, they did not fit the scope of this work. These conference articles are numerous, partly thanks to a growing community organizing meetings and workshops on aBCIs (Brouwer et al., 2015; Mühl et al., 2014a; Nijholt & Heylen, 2013), and they demonstrate that the aBCI field is in constant evolution.
Despite our effort to make this review comprehensive, we cannot exclude that relevant studies may have been missed, mostly owing to a lack of common terminology. In fact, we observed that a large number of research groups do not use the terms “affective brain–computer interface” or “emotion classification”, despite extracting information about subjects' affective states from neurophysiological signals and using machine learning techniques. This suggests that some works may neither have emerged from our search nor have been referenced in other articles in the field. This is likely because the field is highly multidisciplinary, including engineers, psychologists, and physicians, and therefore, at this early stage of development, still builds on heterogeneous foundations.
4.2. Conclusions
The ability of a BCI to recognize a user's affective state is an added value, given that emotions provide useful information for adapting the BCI to the user's needs.
The aBCI field is in constant development, with a wide variety of combinations of neuroimaging techniques, neurophysiological features, and classification algorithms currently being tested and improved. Nevertheless, the gap between what is tested in laboratory settings and the translation of the findings to everyday situations and specific tasks, including applications with patient populations, still needs to be filled.
In particular, BCI developers should focus on the following aspects: a) testing affective state recognition in more ecological settings, during the performance of realistic motor and/or cognitive tasks; b) evaluating classification accuracy in real time, as this is the only way the system can re-adapt to the user's internal state; c) performing assessments with patient populations, who would benefit most from enhanced BCI systems; d) providing more accurate and precise definitions of which affective state is being measured; and e) disseminating results in a more transparent and standardized way.
Conflict of interest
The authors have no competing interests to declare.
References
Altman, N. S. (1992). An introduction to kernel and nearest–neighbor 838
nonparametric regression. The American Statistician,46(3), 175- 839
185. 840
Azcarraga, J., & Suarez, M. T. (2013). Recognizing student emotions 841
using brainwaves and mouse behavior data. International Journal 842
of Distance Education Technologies,11(2), 1-15. 843
Balconi, M., & Lucchiari, C. (2006). EEG correlates (event–related
844
desynchronization) of emotional face elaboration: A temporal845
analysis. Neuroscience Letters,392(1-2).846
Baucom, L. B., Wedell, D. H., Wang, J., Blitzer, D. N., & Shinkareva,847
S. V. (2012). Decoding the neural representation of affective
states. NeuroImage,59(1), 718-727.849
Bießmann, F., Plis, S., Meinecke, F. C., Eichele, T., & Müller, K.-R. (2011). Analysis of multimodal neuroimaging data. IEEE Reviews in Biomedical Engineering, 4, 26-58.
Birbaumer, N., Weber,C., Neuper, C., Buch, E., Haagen, K., & Cohen,853
L. (2006). Physiological regulation of thinking: Brain–computer
interface (BCI) research. Progress in Brain Research,159, 369-855
391.856
Bradley, M. M., & Lang, P. J. (1999). International affective digitized857
sounds (IADS): Stimuli, instruction manual and affective ratings.858
Gainesville: University of Florida.859
Brouwer, A. –M., Zander, T. O., van Erp, J. B. F., Korteling, J. E., &860
Bronkhorst, A. W. (2015). Using neurophysiological signals that861
reflect cognitive or affective state: Six recommendations to avoid862
common pitfalls. Frontiers in Neuroscience,9, 136.863
Buck, R. (1980). Nonverbal behavior and the theory of emotion:864
The facial feedback hypothesis. Journal of Personality and Social
Psychology,38(5), 811-24.866
Cacioppo, J. T. (2004). Feelings and emotions: Roles for electrophys-867
iological markers. Biological Psychology,67(1), 235-243.868
Chanel, G., Kierkels, J. J., Soleymani, M., & Pun, T. (2009). Short-term emotion assessment in a recall paradigm. International Journal of Human–Computer Studies, 67(8), 607-627.
Chanel, G., Kronegg, J., Grandjean, D., & Pun, T.(2006). Multimedia
content representation, classification and security (pp. 530-537).873
Springer.874
Chanel, G., Rebetez, C., Bétrancourt, M., & Pun, T. (2011). Emotion assessment from physiological signals for adaptation of game difficulty. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 41(6), 1052-1063.
Chu, Y., Brown, P., Harniss, M., Kautz, H., & Johnson, K. (2014).879
Cognitive support technologies for people with TBI: Current880
usage and challenges experienced. Disability and Rehabilitation.881
Assistive Technology,9(4), 279-285.882
Crabbe, J. B., Smith, J. C., & Dishman, R. K. (2007). Emotional &883
electroencephalographic responses during affective picture view-
ing after exercise. Physiology & Behavior,90(2-3), 394-404.885
Cutini, S., Moro Basso, S., & Bisconti, S. (2012). Functional near886
infrared optical imaging in cognitive neuroscience: An introduc-
tory review. J Near Infrared Spectrosc,20, 75-92.888
Davidson, R. J. (1992). Anterior cerebral asymmetry and the nature
of emotion. Brain and Cognition,20(1), 125-151.890
Ekman, P. (1999). Basic emotions. Handbook of Cognition and Emo-891
tion,4, 5-60.892
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A.,893
Diacoyanni–Tarlatzis, I., Heider, K., Krause, W. A., LeCompte,894
T., Pitcairn, T., & Ricci–Bitti, P. E. (1987). Universals and895
cultural differences in the judgments of facial expressions of896
emotion. Journal of Personality and Social Psychology,53(4),897
712-717.898
Federici, S., & Borsci, S. (2014). Providing assistive technology in Italy: The perceived delivery process quality as affecting abandonment. Disability and Rehabilitation. Assistive Technology, 1-10.
Federici, S., & Scherer, M. (2012). Assistive technology assessment903
handbook. CRC Press.904
Fox, N. A. (1991). If it’s not left, it’s right: Electroencephalograph 905
asymmetry and the development of emotion. American Psychol- 906
ogist,46(8), 863. 907
Frantzidis, C. A., Bratsas, C., Klados, M. A., Konstantinidis, E., 908
Lithari, C. D., Vivas, A. B., Papadelis, C. L., Kaldoudi, E., Pappas, 909
C., & Bamidis, P. D. (2010a). On the classification of emotional 910
biosignals evoked while viewing affective pictures: An integrated 911
data–mining–based approach for healthcare applications. IEEE 912
Transactions on Information Technology in Biomedicine,14(2), 913
309-318. 914
Frantzidis, C. A., Bratsas, C., Papadelis, C. L., Konstantinidis, E., 915
Pappas, C., & Bamidis, P. D. (2010b). Toward emotion aware 916
computing: An integrated approach using multichannel neu- 917
rophysiological recordings and affective visual stimuli. IEEE 918
Transactions on Information Technology in Biomedicine,14(3), 919
589-597. 920
Hadjidimitriou, S. K., & Hadjileontiadis, L. J. (2012). Toward an 921
EEG–based recognition of music liking using time–frequency 922
analysis. IEEE Transactions on Biomedical Engineering,923
59(12), 3498-3510. 924
Hadjidimitriou, S., & Hadjileontiadis, L. (2013). EEG–Based clas- 925
sification of music appraisal responses using time–frequency 926
analysis and familiarity ratings. IEEE Transactions on Affective 927
Computing,4(2), 161-172. 928
Haselager, P. (2013). Did I do that? Brain–computer interfacing and 933
the sense of agency. Minds and Machines,23(3), 405-418. 934
Heger, D., Herff, C., Putze, F., Mutter, R., & Schultz, T. (2014). Continuous affective states recognition using functional near infrared spectroscopy. Brain–Computer Interfaces, 1(2), 113-125.
Hidalgo-Muñoz, A. R., López, M. M., Pereira, A. T., Santos, I. M., & Tomé, A. M. (2013). Spectral turbulence measuring as feature extraction method from EEG on affective computing. Biomedical Signal Processing and Control, 8(6), 945-950.
Holz, E. M., Botrel, L., Kaufmann, T., & Kübler, A. (2015). Long-term independent brain–computer interface home use improves quality of life of a patient in the locked-in state: A case study. Archives of Physical Medicine and Rehabilitation, 96(3 Suppl), S16-26.
Hosseini, S. A., & Naghibi–Sistani, M. B. (2011). Emotion recogni- 947
tion method using entropy analysis of EEG signals. International 948
Journal of Image, Graphics and Signal Processing,3(5), 30. 949
Hosseini, S. M. H., Mano, Y., Rostami, M., Takahashi, M., Sugiura, 950
M., & Kawashima, R. (2011). Decoding what one likes or dislikes 951
from single–trial fNIRS measurements. Neuroreport,22(6), 269- 952
273. 953
Jenke, R., Peer, A., & Buss, M. (2014). Feature extraction and selec- 954
tion for emotion recognition from EEG. IEEE Transactions on 955
Affective Computing,5(3), 327-339. 956
Jie, X., Cao, R., & Li, L. (2014). Emotion recognition based 957
on the sample entropy of EEG. Biomed Mater Eng,24(1), 958
1185-1192. 959
Kamienkowski, J. E., Ison, M. J., Quiroga, R. Q., & Sigman, M. 960
(2012). Fixation–related potentials in visual search: A combined 961
EEG and eye tracking study. Journal of Vision,12(7), 4. 962
Kashihara, K. (2014). A brain–computer interface for potential 963
non–verbal facial communication based on EEG signals related 964
to specific emotions. Frontiers in Neuroscience,8, 244. 965
Kassam, K. S., Markey, A. R., Cherkassky, V. L., Loewenstein, G.,
966
& Just, M. A. (2013). Identifying emotions on the basis of neural967
activation. PloS One,8(6), e66032.968
Kim, M. -K., Kim, M., Oh, E., & Kim, S.–P. (2013). A review on969
the computational methods for emotional state estimation from
970
the human EEG. Computational and Mathematical Methods in971
Medicine,2013, 573734.
972
Koelstra, S., & Patras, I. (2013). Fusion of facial expressions and973
EEG for implicit affective tagging. Image and Vision Computing,974
31(2), 164-174.975
Kübler, A., Holz, E. M., Riccio, A., Zickler, C., Kaufmann, T., Kleih, S. C., Staiger-Sälzer, P., Desideri, L., Hoogerwerf, E. J., & Mattia, D. (2014). The user-centered design as novel perspective for evaluating the usability of BCI-controlled applications. PloS One, 9(12), e112392.
Kübler, A., Neumann, N., Wilhelm, B., Hinterberger, T., & Birbaumer, N. (2004). Predictability of brain–computer communication. Journal of Psychophysiology, 18(2/3), 121-129.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1999). International984
affective picture system (IAPS): Instruction manual and affective985
ratings. The Center for Research in Psychophysiology, University986
of Florida.
Lee, Y. –Y., & Hsieh, S. (2014). Classifying different emotional states988
by means of EEG–based functional connectivity patterns. PloS989
One,9(4), e95415.990
Lemm, S., Blankertz, B., Dickhaus, T., & Müller, K.-R. (2011). Introduction to machine learning for brain imaging. NeuroImage, 56(2), 387-399.
Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P.
C., Ioannidis, J. P., Clarke, M., Devereaux, P. J., Kleijnen, J., &995
Moher, D. (2009). The PRISMA statement for reporting system-996
atic reviews and meta–analyses of studies that evaluate health care997
interventions: Explanation and elaboration. Annals of Internal998
Medicine,151(4), W-65.999
Liberati, G., Dalboni da Rocha, J., Veit, R., von Arnim, C., Jenner, A., Lulé, D., Ludolph, A. C., Raffone, A., Olivetti Belardinelli, M., & Sitaram, R. (2013, September). Development of a binary fMRI-BCI for Alzheimer patients: A semantic conditioning paradigm using affective unconditioned stimuli. In Affective Computing and Intelligent Interaction, pp. 838-842.
Liberati, G., Dalboni da Rocha, J. L., van der Heiden, L., Raf-
fone, A., Birbaumer, N., Olivetti Belardinelli, M., & Sitaram, R.1007
(2012). Toward a brain–computer interface for Alzheimer’s dis-1008
ease patients by combining classical conditioning and brain state
classification. Journal of Alzheimer’s Disease,29, 1-10.1010
Liberati, G., Pizzimenti, A., Simione, L., Riccio, A., Schettini,
F., Inghilleri, M., Mattia, D., & Cincotti, F. (2015). Develop-1012
ing brain–computer interfaces from a user–centered perspective:1013
Assessing the needs of persons with amyotrophic lateral sclerosis,1014
caregivers, and professionals. Applied Ergonomics,50, 139-46.1015
Liberati, G., Veit, R., Dalboni da Rocha, J., Kim, S., Lulé, D., von Arnim, C., Raffone, A., Olivetti Belardinelli, M., Birbaumer, N., & Sitaram, R. (2012). Combining classical conditioning and brain-state classification for the development of a brain–computer interface (BCI) for Alzheimer's patients. Alzheimer's and Dementia, 8(4), P515.
Lin, Y.–P., Yang, Y.–H., & Jung, T.–P. (2014). Fusion of electroen-
cephalographic dynamics and musical contents for estimating1023
emotional responses in music listening. Frontiers in Neuro-
science,8, 94.1025
Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1-R13.
Mahalanobis, P. C. (1936). On the generalized distance in statistics. 1030
Proceedings of the National Institute of Sciences,2, 49-55. 1031
McLachlan, G. (2004). Discriminant analysis and statistical pattern 1032
recognition (Vol. 544). John Wiley & Sons. 1033
Mikhail, M., El–Ayat, K., Coan, J. A., & Allen, J. J. (2013). Using 1034
minimal number of electrodes for emotion detection using brain 1035
signals produced from a new elicitation technique. International 1036
Journal of Autonomous and Adaptive Communications Systems,1037
6(1), 80-97. 1038
Moghimi, S., Kushki, A., Guerguerian, A. M., & Chau, T. (2012). 1039
Characterizing emotional response to music in the prefrontal 1040
cortex using near infrared spectroscopy. Neuroscience Letters,1041
525(1), 7-11. 1042
Moghimi, S., Kushki, A., Power, S., Guerguerian, A. M., & Chau, 1043
T. (2012). Automatic detection of a prefrontal cortical response 1044
to emotionally rated music using multi–channel near–infrared 1045
spectroscopy. Journal of Neural Engineering,9(2), 026022. 1046
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred 1047
reporting items for systematic reviews and meta–analyses: The 1048
PRISMA statement. Annals of Internal Medicine,151(4), 264- 1049
269. 1050
Morgane, P. J., Galler, J. R., & Mokler, D. J. (2005). A review of 1051
systems and networks of the limbic forebrain/limbic midbrain. 1052
Progress in Neurobiology,75(2), 143-60. 1053
Murugappan, M., Ramachandran, N., & Sazali, Y. (2010). Classi- 1054
fication of human emotion from EEG using discrete wavelet 1055
transform. Journal of Biomedical Science and Engineering,3(4), 1056
390-396. 1057
Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Zunaidi, I., & Hazry, D. (2007). EEG feature extraction for classifying emotions using FCM and FKM. International Journal of Computers and Communications, 1(2), 21-25.
Mühl, C., Allison, B., Nijholt, A., & Chanel, G. (2014a). Affective brain–computer interfaces: Special issue editorial. Brain–Computer Interfaces, 1(2), 63-65.
Mühl, C., Allison, B., Nijholt, A., & Chanel, G. (2014b). A survey of affective brain computer interfaces: Principles, state-of-the-art, and challenges. Brain–Computer Interfaces, 1(2), 66-84.
Mühl, C., Brouwer, A.-M., van Wouwe, N., van den Broek, E. L., Nijboer, F., & Heylen, D. K. J. (2011, September). Modality-specific affective responses and their implications for affective BCI. Proceedings of the Fifth International Brain–Computer Interface Conference 2011, pp. 120-123.
Mühl, C., Heylen, D., & Nijholt, A. (2014). Affective brain–computer interfaces: Neuroscientific approaches to affect detection. The Oxford Handbook of Affective Computing, 217.
Müller, M. M., Keil, A., Gruber, T., & Elbert, T. (1999). Processing of affective pictures modulates right-hemispheric gamma band EEG activity. Clinical Neurophysiology, 110(11), 1913-1920.
Nicolas–Alonso, L. F., & Gomez–Gil, J. (2012). Brain computer 1080
interfaces, a review. Sensors,12(2), 1211-1279. 1081
Nijboer, F., Birbaumer, N., & Kübler, A. (2010). The influence of psychological state and motivation on brain–computer interface performance in patients with amyotrophic lateral sclerosis – a longitudinal study. Frontiers in Neuroscience, 4(55).
Nijboer, F., Plass-Oude Bos, D., Blokland, Y., van Wijk, R., & Farquhar, J. (2014). Design requirements and potential target users for brain–computer interfaces – recommendations from rehabilitation professionals. Brain–Computer Interfaces, 1(1), 50-61.
Nijholt, A., & Heylen, D. K. J. (2013). Editorial (to: Special issue on affective brain–computer interfaces). Brain–Computer Interfaces, 1(1), 63-65.
Pasqualotto, E., Matuz, T., Federici, S., Ruf, C. A., Bartl, M., Olivetti1093
Belardinelli, M., Birbaumer, N., & Halder, S. (2015). Usability1094
and workload of access technology for people with severe motor1095
impairment: A comparison of brain–computer interfacing and eye
tracking. Neurorehabilitation and Neural Repair (in press).1097
Pasqualotto, E., Simonetta, A., Federici, S., & Olivetti Belardinelli,1098
M. (2009, September). Usability evaluation of BCIs. In Assistive1099
Technology from Adapted Equipment to Inclusive Environments1100
AAATE 2009.1101
Petrantonakis, P. C., & Hadjileontiadis, L. J. (2010). Emotion recogni-1102
tion from brain signals using hybrid adaptive filtering and higher1103
order crossings analysis. Affective Computing, IEEE Transactions1104
on,1(2), 81-97.1105
Pfurtscheller, G., Allison, B. Z., Brunner, C., Bauernfeind, G., Solis-Escalante, T., Scherer, R., Zander, T. O., Mueller-Putz, G., Neuper, C., & Birbaumer, N. (2010). The hybrid BCI. Frontiers in Neuroscience, 4.
Picard, R. W. (2000). Affective computing. MIT press.1110
Putze, F., & Schultz, T. (2014). Adaptive cognitive technical systems.1111
Journal of Neuroscience Methods,234, 108-115.1112
Reuderink, B., Poel, M., & Nijholt, A. (2011). The impact of loss of
control on movement BCIs. IEEE Transactions on Neural Sys-
tems and Rehabilitation Engineering: A Publication of the IEEE1115
Engineering in Medicine and Biology Society,19(6), 628-637.1116
Russell, J. A., & Mehrabian, A. (1977). Evidence for a three–factor1117
theory of emotions. Journal of Research in Personality,11(3),1118
273-294.1119
Schettini, F.,Riccio, A., Simione, L., Liberati, G., Caruso, M., Frasca,1120
V., Calabrese, B., Mecella, M., Pizzimenti, A., Inghilleri, M., Mat-1121
tia, D., & Cincotti, F. (2015). Assistive device with conventional,1122
alternative, and brain–computer interface inputs to enhance inter-1123
action with the environment for people with amyotrophic lateral1124
sclerosis: A feasibility and usability study. Archives of Physical1125
Medicine and Rehabilitation,96(3 Suppl), S46-S53.
Schmidt, L. A., & Trainor,L. J. (2001). Frontal brain electrical activity1127
(EEG) distinguishes valence and intensity of musical emotions.1128
Cognition & Emotion,15(4), 487-500.
Simon, N., Käthner, I., Ruf, C. A., Pasqualotto, E., Kübler, A., & Halder, S. (2014). An auditory multiclass brain–computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user. Frontiers in Human Neuroscience, 8, 1039.
Sitaram, R., Lee, S., Ruiz, S., Rana, M., Veit, R., & Birbaumer, N. 1134
(2011). Real–time support vector classification and feedback of 1135
multiple emotional brain states. NeuroImage,56(2), 753-765. 1136
Soleymani, M., Pantic, M., & Pun, T. (2012). Multimodal emotion 1137
recognition in response to videos. Affective Computing, IEEE 1138
Transactions on,3(2), 211-223. 1139
Stikic, M., Johnson, R. R., Tan, V., & Berka, C. (2014). EEG-based classification of positive and negative affective states. Brain–Computer Interfaces, 1(2), 99-112.
Strait, M., & Scheutz, M. (2014). What we can and cannot (yet) do 1143
with functional near infrared spectroscopy. Frontiers in Neuro- 1144
science,8, 117. 1145
Tai, K., & Chau, T. (2009). Single–trial classification of NIRS signals 1146
during emotional induction tasks: Towards a corporeal machine 1147
interface. Journal of Neuroengineering and Rehabilitation,6, 39. 1148
Van Erp, J. B., Brouwer, A.-M., & Zander, T. O. (2015). Introduc- 1149
tion to using neurophysiological signals that reflect cognitive or 1150
affective state. Frontiers in Neuroscience,9, 193. 1151
Van Erp, J. B., Lotte, F., & Tangermann, M. (2012). Brain–computer 1152
interfaces: Beyond medical applications. Computer, (4), 26-34. 1153
Vapnik, V., Golowich, S., & Smola, A. (1997). Support vector method for function approximation, regression estimation, and signal processing. In M. C. Mozer, M. I. Jordan, & T. Petsche (Eds.), Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press.
Vytal, K., & Hamann, S. (2010). Neuroimaging support for 1159
discrete neural correlates of basic emotions: A voxel–based 1160
meta–analysis. Journal of Cognitive Neuroscience,22(12), 2864- 1161
2885. 1162
Yoon, H. J., & Chung, S. Y. (2013). EEG–based emotion estimation 1163
using bayesian weighted–log–posterior function and perceptron 1164
convergence algorithm. Computers in Biology and Medicine,1165
43(12), 2230-2237. 1166
Yuvaraj, R., Murugappan, M., Ibrahim, N. M., Omar, M. I., Sundaraj, 1167
K., Mohamad, K., Palaniappan, R., & Satiyan, M. (2014). Emo- 1168
tion classification in Parkinson’s disease by higher–order spectra 1169
and power spectrum features using EEG signals: A comparative 1170
study. Journal of Integrative Neuroscience,13(1), 89-120. 1171
Zander, T. O., Lehne, M., Ihme, K., Jatzev, S., Correia, J., Kothe, 1172
C., Picht, B., & Nijboer, F. (2011). A dry EEG–system for 1173
scientific research and brain–computer interfaces. Frontiers in 1174
Neuroscience,5, 53. 1175
Zickler, C., Halder, S., Kleih, S. C., Herbert, C., & Kübler, A. (2013). Brain painting: Usability testing according to the user-centered design in end users with severe motor paralysis. Artificial Intelligence in Medicine, 59(2), 99-110.