Towards Artificial Learning Companions for Mental
Imagery-based Brain-Computer Interfaces
L. Pillette
Inria, LaBRI (Univ. Bordeaux,
CNRS, Bordeaux-INP), France
lea.pillette@inria.fr
C. Jeunet
Univ. Rennes, Inria, IRISA, CNRS,
France / CNBI, EPFL, Switzerland
camille.jeunet@inria.fr
R. N’Kambou
GDAC, UQAM, Quebec
Canada
nkambou@gmail.com
B. N’Kaoua
Handicap, Activity, Cognition, Health,
Univ. Bordeaux, CNRS, France
bernard.nkaoua@u-bordeaux.fr
F. Lotte
Inria, LaBRI (Univ. Bordeaux,
CNRS, Bordeaux-INP), France
fabien.lotte@inria.fr
ABSTRACT
Mental Imagery based Brain-Computer Interfaces (MI-BCI) enable their users to control an interface, e.g., a prosthesis, by performing mental imagery tasks only, such as imagining a right arm movement, while their brain activity is measured and processed by the system. Designing and using a BCI requires users to learn how to produce different and stable patterns of brain activity for each of the mental imagery tasks. However, current training protocols do not enable every user to acquire the skills required to use BCIs. These training protocols are most likely one of the main reasons why BCIs remain not reliable enough for wider applications outside research laboratories. Learning companions have been shown to improve training in different disciplines, but they have barely been explored for BCIs so far. This article aims at investigating the potential benefits learning companions could bring to BCI training by improving the feedback, i.e., the information provided to the user, which is primordial to the learning process and yet has proven both theoretically and practically inadequate in BCI. This paper first presents the potential of BCIs and the limitations of current training approaches. Then, it reviews both the BCI and learning companion literature regarding three main characteristics of feedback: its appearance, its social and emotional components, and its cognitive component. From these considerations, this paper draws some guidelines, identifies open challenges and suggests potential solutions to design and use learning companions for BCIs.
RESUME
Mental imagery-based brain-computer interfaces (BCIs) allow their users to send commands to an interface, for example a prosthesis, solely by performing mental imagery tasks, such as imagining their right arm moving. While these tasks are performed, the users' brain activity is recorded and analyzed by the system. In order to use these interfaces, users must learn to produce different and stable patterns of brain activity for each of the mental imagery tasks. However, existing training protocols do not enable all users to master the skills required to use BCIs. These training protocols are most likely among the main reasons why BCIs lack reliability and are not used more widely outside research laboratories. Yet learning companions, which have already improved learning effectiveness in various disciplines, have barely been studied for BCIs. The objective of this article is therefore to explore the various benefits they could bring to BCI training by improving the feedback given to the user, i.e., the information provided about the task. This information is primordial to learning, and yet it has been shown to be inadequate both theoretically and in practice. The article first presents the potential of BCIs and the limitations of current training protocols. Then, a review of the BCI and learning companion literature is carried out regarding three main characteristics of user feedback, namely its appearance, its social and emotional components, and finally its cognitive component. From these considerations, this paper provides some guidelines, identifies challenges to be addressed and suggests potential solutions for designing and using learning companions for BCIs.
KEYWORDS
Brain-Computer Interface, Learning Companion, Affective Feedback, Social Feedback
1 INTRODUCTION
A Brain-Computer Interface (BCI) can be defined as a technology that enables its users to interact with computer applications and machines by using their brain activity alone [13]. In most BCIs, brain activity is measured using Electroencephalography (EEG), which uses electrodes placed on the scalp to record small electrical currents reflecting the activity of large populations of neurons [13]. In a BCI, EEG signals are processed and classified in order to assign a specific command to a specific EEG pattern. For instance, a typical BCI system can enable a user to move a cursor to the left or right on a computer screen by imagining left or right hand movements, each imagined movement leading to a specific EEG pattern [62]. In this article we focus on Mental Imagery-based BCIs (i.e., MI-BCIs), with which users have to consciously modify their brain activity by performing mental imagery tasks (e.g., imagining hand movements or mental calculations) [13, 62]. MI-BCIs require users to train and to adapt their own strategies to perform the mental imagery tasks based on the feedback they are provided with. At the end of the training, the system should recognize which task the user is performing as accurately as possible. However, it has been shown, both theoretically and practically, that the existing training protocols do not provide adequate feedback for acquiring these BCI skills [26, 48]. This, among other reasons, could explain why BCIs still lack reliability and why around 10 to 30% of users cannot use them at all [45, 58]. Several experiments showed that taking into account recommendations from the educational psychology field, e.g., providing multisensory feedback, can improve BCI performance and user experience [41, 76]. However, research using social and emotional feedback remains scarce, despite the fact that it is recommended by educational psychology [20].
Indeed, it has been hypothesized that our social behavior had a major influence on the development of our brain and cognitive abilities [16, 84]. Social interaction was traditionally involved in the intergenerational transmission of practices and knowledge. However, its importance for learning was acknowledged only recently, with the development of the social interdependence theory, which states that the achievement of one person's goal, i.e., here learning, depends on the actions of others. Cooperative learning builds on this idea and promotes collaboration between students in order to reach their common goal [30]. These theories and methods have shown that learning can be strengthened by social feedback [11, 25].
Articial learning companions, which are animated conversa-
tional agents involved in an interaction with the user [
11
], could
provide such social and emotional feedback. Physiological and neu-
rophysiological data recordings oer the possibility to infer users’
states/traits and to adapt the behavior of the companion accord-
ingly [
7
,
9
]. The training would benet from the later, for example
the diculty of the task could be modulated in order to keep the
user motivated. In particular, the feedback provided during the
training could be improved, e.g., by adapting to the emotional state
of the user. Learning companions are therefore able to take into
account the cognitive abilities and aective states of users, and
to provide them with emotional or cognitive support. They have
already proven to be eective for improving learning of dierent
abilities, e.g., mathematics or informatics, [
10
,
34
]. From all types
of computational supports which enrich the social context during
learning (i.e., educational agent) we chose to focus on learning
companions because they engage in a non-authoritative interaction
with the user, can have several roles ranging from collaborator, to
competitor or teachable student and could potentially involve using
several of them with complementary roles [11].
Learning companions could contribute to improving BCI training by, among other things, enriching the social context of BCI. This article aims at identifying the various benefits that learning companions can bring to BCI training, and how they can do so. To achieve this objective, this article starts by detailing the principles and applications of BCIs as well as the limitations of current BCI training protocols. Once the keys to understanding BCIs have been provided, this article focuses on three main components of BCI feedback which should be improved in order to improve BCI training. First of all, we study the appearance of feedback, which is one of its most studied characteristics. Second, we study its social component, i.e., the amount of interaction the user has with a social entity during the learning task, and its emotional component, i.e., the feedback components which aim at eliciting an emotional response from the user. Both are still scarcely used in BCI, though the existing results seem promising. Finally, we concentrate on its cognitive component, i.e., which information to provide users with in order to improve their understanding of the task, which represents one of the main challenges in designing BCI feedback. For each of these three feedback components, we review the literature of both the BCI and learning companion fields to deduce from them some guidelines, challenges and potential research directions.
2 BRAIN COMPUTER INTERFACE SKILLS
2.1 BCI principles and applications
Since they make computer control possible without any physical movement, MI-BCIs rapidly became promising for a number of applications [14]. They can notably be used by severely motor-impaired users to control various assistive technologies such as prostheses or wheelchairs [54]. More recently, MI-BCIs were shown to be promising for stroke rehabilitation as well, as they can be used to guide stroke patients to stimulate their own brain plasticity towards recovery [2]. Finally, MI-BCIs can also be used beyond medical applications [79], for instance for gaming, multimedia or hands-free control, among many other possible applications [14]. However, as mentioned above, despite these many promising applications, current EEG-based MI-BCIs are unfortunately not really usable, i.e., they are neither reliable nor efficient enough [13, 14, 45]. In particular, the mental commands from the users are too often incorrectly recognized by the MI-BCI. There is thus a pressing need to make them more usable, so that they can deliver on their promises.
Controlling an MI-BCI is a skill that needs to be learned and refined: the more users practice, the better they become at MI-BCI control, i.e., their mental commands are correctly recognized by the system increasingly often [27]. Learning to control an MI-BCI is made possible thanks to the use of neurofeedback (NF) [75]. NF consists in showing users feedback on their brain activity and/or, as with BCI, in showing them which mental command was recognized by the BCI, and how well so. This is typically achieved using visual feedback, e.g., a gauge displayed on screen, reflecting the output of the machine learning algorithm used to recognize the mental commands from EEG signals [58] (see Figure 1). This guides users to learn to perform the MI tasks increasingly better, so that they are correctly recognized by the BCI. Thus, human learning principles need to be considered in BCI training procedures [46].
2.2 Limitations of the current training protocol
Currently, most MI-BCI studies are based on the Graz training protocol or on variants of the latter. This protocol relies on a two-stage procedure [62]: (1) training the system and (2) training the user. In stage 1, the user is instructed to successively perform a certain series of MI tasks (for example, left and right hand MI). Using the recordings of brain activity generated as these various MI tasks are performed, the system attempts to extract characteristic patterns of each of the mental tasks. These extracted features are used to train a classifier, the goal of which is to determine the class to which the signals belong. Then, in stage 2, users are instructed to perform the MI tasks, but this time feedback (based on the system training performed in stage 1) is provided to inform them of the MI task recognized by the system. The user's goal is to develop effective strategies that will allow the system to easily recognize the MI tasks that they are performing. During such training, participants are asked to perform specific mental tasks repeatedly, e.g., left or right-hand motor imagery, and are provided with a visual feedback shown as a bar indicating the recognized task and the corresponding confidence level of the classifier (see Figure 1).

Figure 1: Example of the feedback that is often provided to users during training, here right and left hand motor imagery training. At the moment the picture was taken, the user had to imagine moving his left hand. The blue bar indicates which task has been recognized and how confident the system is in its recognition: the longer the bar, the more confident the system. Here the system correctly recognizes the task that the user is performing and is quite confident about it [62].

Unfortunately, such standard training approaches satisfy very few of
the guidelines from human learning psychology and instructional design to ensure an efficient skill acquisition [48]. For instance, a typical BCI training session provides a uni-modal (visual) and corrective feedback (indicating whether the learner performed the task correctly) (see Figure 1), using fixed and (reported as) boring training tasks identically repeated until the user achieves a certain level of performance, with these training tasks being provided synchronously. In contrast, it is recommended to provide a multi-modal and explanatory feedback (indicating what was right/wrong about the task performed by the user) that is goal-oriented (indicating a gap between the current performance and the desired level of performance), in an engaging and challenging environment, using varied training tasks with adaptive difficulty [53, 74].
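To make the two-stage procedure described above more concrete, the following minimal sketch (in Python, using NumPy and scikit-learn) illustrates how a calibration stage could train a classifier on simple band-power features, and how its online output could then drive the classical feedback bar. It is only an illustrative sketch under the assumption of pre-epoched, band-pass filtered EEG; the log-variance feature, data shapes and the text bar are assumptions made for this example, not the exact pipeline of the Graz protocol or of the cited studies.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def log_bandpower(epochs):
    """Log-variance of each channel, a crude proxy for mu/beta band power.
    `epochs` has shape (n_trials, n_channels, n_samples) and is assumed
    to be already band-pass filtered (e.g., 8-30 Hz)."""
    return np.log(np.var(epochs, axis=2))

# --- Stage 1: train the system on calibration data -----------------------
# Hypothetical calibration recording: left- vs right-hand motor imagery.
rng = np.random.default_rng(0)
calib_epochs = rng.standard_normal((40, 8, 500))   # 40 trials, 8 channels, 500 samples
calib_labels = np.repeat([0, 1], 20)                # 0 = left MI, 1 = right MI

clf = LinearDiscriminantAnalysis()
clf.fit(log_bandpower(calib_epochs), calib_labels)

# --- Stage 2: online user training with feedback -------------------------
def feedback_bar(epoch, width=20):
    """Map the classifier's confidence for one new epoch to a text 'bar',
    mimicking the usual Graz-style feedback: direction = recognized task,
    length = classifier confidence."""
    proba = clf.predict_proba(log_bandpower(epoch[np.newaxis]))[0]
    task = "right" if proba[1] > proba[0] else "left"
    confidence = proba.max()
    bar = "#" * int(confidence * width)
    return f"{task:>5} | {bar} ({confidence:.2f})"

new_epoch = rng.standard_normal((8, 500))           # one incoming epoch
print(feedback_bar(new_epoch))
```

In a real setup, the random arrays would be replaced by filtered EEG epochs, and the bar would be rendered on screen continuously during each trial rather than printed once.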
Moreover, it is necessary to consider users' motivational and cognitive states to ensure they can perform and learn efficiently [33]. Keller states that optimizing motivational factors - Attention (triggering a person's curiosity), Relevance (the compliance with a person's motives or values), Confidence (the expectancy for success), and Satisfaction (by intrinsic and extrinsic rewards) - leads to more user efforts towards the task and thus to better performance.
In short, current standard BCI training approaches are both theoretically [48] and practically [26] suboptimal, and are unlikely to enable efficient learning of BCI-related skills. Intelligent artificial agents such as learning companions could provide tools to improve several characteristics of BCI training.
3 BUILDING A BCI LEARNING COMPANION - EXISTING TOOLS AND CHALLENGES
Learning companions have been defined by [11] as follows: "In an extensive definition, a learning companion is a computer-simulated character, which has human-like characteristics and plays a non-authoritative role in a social learning environment."

This definition offers three main points that will be elaborated in the BCI context in the following sections. First, the learning companion must facilitate the learning process, in particular by encouraging the learner in a social learning activity. Using an anthropomorphic appearance facilitates this social context. Furthermore, its interventions should be consistent with the general recommendations concerning feedback, which would also contribute to its human-likeness and its efficiency (see Section 3.1).
Second, learning companions are educational agents, i.e., computational supports which enrich the social context during learning [11]. Such an environment could provide a motivating and engaging context that would favor learning (see Section 3.2).
Finally, the benet of a learning companion over the other types
of educational agents is that its role can greatly vary from student
to tutor given the learning model used and the knowledge that
the companion holds. At the moment, an educational agent with
an authoritative role of teacher is not realistic because of the lack
of a cognitive model of the task. Such a model would provide in-
formation about how the learner’s prole (i.e., traits and states)
inuences BCI performance and which feedback to provide accord-
ingly [
26
,
28
]. It would be necessary to understand, predict and
therefore improve the acquisition of BCI skills (See Section 3.3).
3.1 Appearance of feedback
As stated above, the appearance of the learning companion greatly impacts its influence on the user. BCI performance is also influenced by the appearance of the feedback that is provided during training. Therefore, much research has been and is still being conducted toward improving this characteristic of the feedback.
3.1.1 BCI Literature. While it is recognized that feedback improves learning, many authors have attempted to clarify which features enhance this effect [3, 5, 57]. To be effective, feedback should be directive (indicating what needs to be revised), facilitative (providing suggestions to guide learners) and should offer verification (specifying if the answer is correct or incorrect). It should also be goal-directed, by providing information on the progress of the task with regard to the goal to be achieved. Finally, feedback should be specific, clear, purposeful and meaningful. These different features increase the motivation and the engagement of learners [22, 68, 81]. As already underlined in [48], classical BCI feedback satisfies few of these requirements. Generally, BCI feedback is not explanatory (it does not explain what was good or bad, nor why), nor goal-directed, and it does not provide details about how to improve the answer. Moreover, it is often unclear and has no intrinsic meaning for the learner. For example, BCI feedback is often a bar representing the output of the classifier, which is a concept most BCI users are unfamiliar with.
L. Pillee, C. Jeunet, R. N’Kambou, B. N’Kaoua, and F. Loe
Recently, some promising areas of research have been investigated. For example, the study in [40] showed that performance is enhanced when feedback is adapted to the characteristics of the learners. In their study, positive feedback, i.e., feedback provided only for a correct response, was beneficial for new or inexperienced BCI users, but harmful for advanced BCI users.
Several studies also focused on the modalities of feedback presentation. The work in [64] used BCI for motor neurorehabilitation and observed that proprioceptive feedback (feeling and seeing hand movements) improved BCI performance significantly. A recent study [29] tested a continuous tactile feedback by comparing it to an equivalent visual feedback. Performance was higher with tactile feedback, indicating that this modality can be a promising way to enhance BCI performance. The study in [76] showed that multimodal (visual and auditory) continuous feedback was associated with better performance and less frustration compared to the conventional bar feedback.
Other studies investigated new ways of providing task-specific and more tangible feedback. In [18] and [52], the authors created tools using augmented reality to display the user's EEG activity on the head of a tangible humanoid called Teegi (see Figure 2), and superimposed on the reflection of the user, respectively.

Figure 2: User visualizing his brain activity using Teegi [18].

This research contributes to making feedback more attractive, which can have a beneficial impact. For example, it has been shown that using game-like, 3D or virtual reality environments increases user engagement and motivation [66].
3.1.2 Learning Companion Literature. Much research regarding the appearance that would maximize the acquisition of a skill/ability has also been conducted for learning companions. Some main points emerge from this work; here are some that could be useful when designing a companion for BCI purposes:
- A physical, tangible companion seems to increase social presence in comparison to a virtual companion [23, 70]
- Anthropomorphic features facilitate social interactions [15]
- Physical characteristics, personality/abilities, functionalities and learning function should be consistent [60]
Interestingly, the influence of learning companions was also studied using measures of brain activity. For example, a study using functional Magnetic Resonance Imaging (fMRI) [38] investigated the neural correlates of the attribution of intentions and desires (i.e., theory of mind) for different robot features. Results show that theory of mind-related cortical activity is positively correlated with the perceived human-likeness of a robot. This implies that the more realistic the robots, the more people attribute intentions and desires to them.
3.1.3 Future challenges. Feedback that is both adapted and adaptive to users is lacking in both the BCI and learning companion literatures. Much research has been and is still being conducted toward identifying the feedback characteristics and learner characteristics influencing BCI performance. However, the often low number of participants in current experiments limits those results, and further research should be conducted to clarify the type of feedback to provide depending on the user's profile.
Additionally, an interesting research direction could be to use several learning companions, including Teegi or another tangible system which could display the brain activity of the user. Each companion could have a different role, and one of them could be a tutor providing insights about how to interpret the displayed information related to brain activity.
3.2 Social & Emotional feedback
Learning companions are more than just another means to provide feedback. Their main benefit is that they enrich the social context of learning and can provide emotional feedback. As mentioned, BCI training still lacks such elements in its feedback, though the current literature tends to indicate that it would benefit from them.
3.2.1 BCI Literature. Indeed, [59] showed that mood, assessed prior to each BCI session (using a quality of life questionnaire), correlates with BCI performance. Some BCI experiments provided emotional feedback, using smiling faces to indicate to the user whether the task performed had been recognized by the system [39, 43]. However, none of these studies used a control group. Therefore, the impact of such feedback remains unknown for BCI applications. A similar study was conducted in neurofeedback by [49], which showed that providing participants with an emotional and social feedback as a reward enabled better control over the activation of the dorsal anterior cingulate cortex (ACC), monitored using fMRI, than a typical moving bar. The feedback consisted of an avatar's smile whose width varied depending on the user's performance: the better the performance, the wider the smile. This type of feedback can be considered as both emotional and social because of the use of an avatar.
The use of social feedback in BCI has been encouraged in several papers [48, 50, 73]. The work in [25] showed that a social feedback can be considered as a reward just as much as a monetary one. Moreover, the influence of a reward has already been demonstrated in BCI: it has been shown that a monetary reward can modulate the amplitude of some brain activity, including the activity involved during MI-BCI use [35, 72]. However, research about the use of social feedback in BCI remains scarce and often lacks control groups. One of the main original purposes of BCIs was to enable their users to communicate, and some researchers have created tools to provide this type of communication in social environments, for example using Twitter [17], but no comparison was made with an equivalent non-social environment. Studies in [8], [61] and [19] presented games where users played in pairs, collaborating and/or competing against each other. The study in [8] found that this type of learning context proved successful in significantly improving the user experience and the performance of the best-performing users.
Finally, we explored the use of social and emotional feedback when creating PEANUT (i.e., Personalized Emotional Agent for Neurotechnology User Training), which is the first learning companion dedicated to BCI training [63]. Its interventions were composed of spoken sentences and a displayed facial expression in between two trials (see Figure 3). The interventions were selected based on the performance and progression of the user. We tested PEANUT's influence on users' performance and experience during BCI training using two groups of users with similar profiles. One group was trained to use the BCI with PEANUT and the other without. Our results indicated that the user experience was improved when using PEANUT: users felt that they were learning and memorizing better when they were learning with PEANUT. Even though their mean performance did not change, the variability of the performance in the group with PEANUT was significantly higher than in the other group. Such a result might indicate a differential effect of learning companions on users [9].
3.2.2 Learning Companion Literature. Several other studies have shown the value of learning companions as a source of social connection that is sometimes essential in certain learning situations [44, 69]. They can play different roles, such as co-learner or co-tutor, in which they are often called upon to demonstrate certain capacities of social interaction, such as empathy through emotional feedback [44] or respect for social norms [31, 69].
Emotional feedback aims at regulating the emotions of the learner throughout the learning process. Positive emotions are known to improve problem solving, decision-making and creativity, while negative emotions are harmful in these situations [24]. Previous studies regarding emotional feedback investigated emotional regulation strategies to manage learners' emotions and behaviors [6, 9, 51]. The positive impact of emotional feedback has also been highlighted in some educational contexts [77].
In addition, it is important to adapt the social interaction to each learner. Indeed, it has been shown that a companion that adapts its behavior to the learner's profile increases the development of a positive attitude [21].
Learning companions are sometimes embodied in robots to better materialize social presence. Tega is a social companion robot which interprets students' emotional responses, measured from facial expressions, in a game aimed at learning Spanish vocabulary [21]. It approximates the emotions of the learner and, over time, determines the impact of these emotions on the learner to finally create a personalized motivational strategy adapted to the latter.
To ensure adaptation, machine learning techniques are often deployed. With the advancement of Artificial Intelligence (AI), more efficient techniques are now used to help the companion better learn from the learner's behavior. In the case of the social companion NICO (a Neuro-Inspired COmpanion robot), the model used for learning emotions and adapting to the user is a combination of a Convolutional Neural Network and a Self-Organizing Map that recognizes an emotion from the user's facial expression and learns to express the same [12]. The model allows the robot to adapt to a different user by associating the perceived emotion with an appropriate expression, which makes the companion more socially acceptable in the environment in which it operates.
Figure 3: Experimental setting where PEANUT (on the left)
provides a user with social presence and emotional support
adapted to his performance and progression [63].
3.2.3 Future challenges. As mentioned above, assessing users' emotional states is particularly useful for learning companions. However, doing so reliably remains a challenge, particularly for covert emotions, which are not visible in facial expressions. Passive BCIs, in which brain activity is analyzed to provide metrics related to the mental state of the user in order to adapt the system accordingly, could be used for this purpose [85]. Monitoring emotional states nevertheless remains a challenge, because experiments compare their emotion recognition performance against self-reported measures from the users. Such an experimental protocol assumes that people are able to reliably self-assess their own emotions, which might not be true [65]. Furthermore, the brain structures involved in emotional states are subcortical, e.g., the amygdala [42], which means that reliably monitoring them using non-invasive EEG is an issue [56]. Nevertheless, some promising results have been found, in particular using EEG [56] (see Table 1).
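As a hedged sketch of what such passive monitoring could look like, the code below computes frontal alpha asymmetry, one index that has been related to affective state in the EEG literature, from two frontal channels using Welch power spectra. The channel names, sampling rate and interpretation are assumptions for illustration only; a real passive BCI would require proper preprocessing, artifact handling and validation against ground truth, which is precisely the challenge discussed above.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Average power of `signal` in the alpha band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_frontal, right_frontal, fs=250):
    """log(right) - log(left) alpha power; often used as a coarse
    valence-related index (assumption: channels roughly F3 and F4)."""
    return np.log(alpha_power(right_frontal, fs)) - np.log(alpha_power(left_frontal, fs))

# Illustrative 4-second segments of two frontal channels (random data here).
rng = np.random.default_rng(1)
f3, f4 = rng.standard_normal(1000), rng.standard_normal(1000)
print(f"Frontal alpha asymmetry: {frontal_alpha_asymmetry(f3, f4, fs=250):.3f}")
```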
It also remains to be evaluated whether using a learning companion can help reduce the fear induced by the BCI setup, which has a detrimental effect on BCI performance [83].
3.3 Cognitive feedback
Like social and emotional feedback, cognitive feedback constitutes another challenge and presents a great opportunity to improve BCI training. According to Balzer et al. [4], providing cognitive feedback "refers to the process of presenting the person information about the relations in the environment (i.e., task information), relations perceived by the person (i.e., cognitive information), and relations between the environment and the person's perceptions of the environment (i.e., functional validity information)". They suggest that task information is the type of cognitive feedback that most influences performance. Therefore, providing BCI users with information about the way they do vs. should perform the MI tasks is most likely of the utmost importance.
3.3.1 BCI Literature. Currently, the most used cognitive feedback in MI-BCI is the classification accuracy (CA), i.e., the percentage of mental commands that are correctly recognized by the system [27]. While informative, this feedback remains only evaluative: it provides some information about how well the learner performs the task, but no information about how they should perform it. Some studies have been conducted in order to enrich this feedback. [32] proposed a richer "multimodal" feedback providing information about the task recognized by the classifier, the strength/confidence of this recognition, as well as the dynamics of the classifier output throughout the whole trial. [76] chose to add information concerning the stability of the EEG signals to the standard feedback based on CA, while [71] added an explanatory feedback based on the level of muscular relaxation to this CA-based feedback. This additional feedback was used to explain poor CA, as a positive correlation had previously been suggested between muscular relaxation and CA. Finally, [86] provided learners with a 2-dimensional feedback based on a basketball metaphor: ball movements along the horizontal axis were determined by the classification of contra- versus ipsilateral activity (i.e., between the two brain hemispheres), whereas vertical movements resulted from classifying contralateral activity during the baseline versus the MI interval.
By adding dimensions to the standard CA-based feedback, these approaches provided more information to the learner about the way to improve their performance. Nonetheless, all of them are still mainly based on CA, which may not be appropriate to assess users' learning [47]. Indeed, CA may not properly reflect successful EEG pattern self-regulation. Yet, learning to self-regulate specific EEG patterns, and more specifically to generate stable and distinct patterns for each MI task, are the skills to be acquired by the learner [28].
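To make the distinction between classification accuracy and pattern self-regulation concrete, the sketch below computes both on band-power features: the cross-validated accuracy of a classifier, and a simple separability score between the two classes. The distinctiveness score used here (a standardized distance between class means) is only an illustration of the idea discussed above; it is not the specific set of metrics proposed in [47].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def class_distinctiveness(features, labels):
    """Standardized distance between the two class means, averaged over
    features: a crude indicator of how distinct the EEG patterns are,
    independently of any particular classifier."""
    a, b = features[labels == 0], features[labels == 1]
    pooled_std = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2) + 1e-12
    return np.mean(np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled_std)

# Hypothetical band-power features for 60 trials and 8 channels.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 30)
features = rng.standard_normal((60, 8)) + 0.3 * labels[:, np.newaxis]  # small class shift

accuracy = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()
print(f"classification accuracy: {accuracy:.2f}")
print(f"pattern distinctiveness: {class_distinctiveness(features, labels):.2f}")
```

Reporting both numbers to the learner, rather than accuracy alone, is one simple way to hint at whether the produced patterns are actually becoming more stable and distinct.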
3.3.2 Learning Companion Literature. Besides emotional (affective) and social assistance, learning companions can also be designed to provide cognitive support to the learner. In this perspective, many solutions exist in the field of intelligent tutoring systems (ITS), which use computational tools to tutor the learner. For instance, the companion strategy can be based on the current student learning path, compared to an explicit cognitive model of the task which highlights the different solution paths and skills involved [1]. A learning path gathers the actions taken by the learner (providing an answer, asking for help, taking notes, etc.) and the context of these actions (e.g., did the learner attempt an answer before asking for help?). Recognizing learners' learning paths and the skills they used can also be done using a constraint-based model of the task [55] or a model of the task learnt using relevant machine learning or data mining techniques. Whatever approach is used, the goal is to create a model within which a learning companion can act and track learners' actions or behavior to determine how they learn and provide them with effective cognitive accompaniment or assistance.
On the sidelines of these cognitive tutors, example-tracing tutors [36] have been developed more recently. They elaborate their feedback by comparing the actual strategy of the user with previous correct and incorrect strategies, which means that they do not require any preexisting cognitive model of the task. This type of tutoring is based on imitating the successful behavior of others. Two types of imitation are possible: one by studying worked examples, the other by directly observing someone else performing the task [80].
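The following sketch illustrates, in a hedged way, the kind of comparison an example-tracing tutor performs: the learner's sequence of reported actions is matched against stored examples of successful and unsuccessful strategies, and the closest examples could then drive the feedback. The action vocabulary and stored strategies are entirely hypothetical, chosen here to fit a motor-imagery context rather than taken from any published system.

```python
from difflib import SequenceMatcher

# Hypothetical stored strategies (sequences of self-reported actions), with outcomes.
EXAMPLES = [
    (["close eyes", "imagine grasping", "feel the movement"], "success"),
    (["imagine grasping", "feel the movement", "keep rhythm steady"], "success"),
    (["count mentally", "tense the arm", "imagine grasping"], "failure"),
]

def compare_to_examples(learner_actions):
    """Return the stored strategies ranked by similarity to the learner's
    action sequence, as an example-tracing tutor might do before
    generating feedback."""
    ranked = sorted(
        EXAMPLES,
        key=lambda ex: SequenceMatcher(None, learner_actions, ex[0]).ratio(),
        reverse=True,
    )
    return [(outcome, round(SequenceMatcher(None, learner_actions, actions).ratio(), 2))
            for actions, outcome in ranked]

print(compare_to_examples(["imagine grasping", "tense the arm", "feel the movement"]))
```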
Table 1: Summary of the different recommendations, challenges and potential solutions raised in this article.

Appearance of feedback. Recommendation: BCI feedback should take into consideration recommendations from educational psychology, e.g., be multisensory [76] or attractive [66]. Challenge: the feedback remains mostly non-adaptive and/or unadapted, and research toward improving this is made difficult by the often small number of participants. Potential solution: use learning companions to provide task-related feedback and explain to users how their brain activity is modified when they perform a task.

Social and emotional feedback. Recommendation: BCI training should be engaging and motivating. Challenge: assessing users' states, e.g., emotional states, often remains unreliable, still needs training and therefore time. Potential solution: passive BCI could be used to monitor the learner's state, e.g., emotion or motivation, but also the level of attention or fatigue, in order to adapt the training.

Cognitive feedback. Recommendation: the feedback should provide insights and guidance to the user. Challenge: a cognitive model is still lacking, which limits the improvement of the training. Potential solution: using an example-based learning companion, which does not require a cognitive model of the task, could improve learning.

3.3.3 Future challenges. The second type of imitation-based training mentioned above has already proven useful in BCI: [37] showed that BCI training could be enhanced by having users watch someone performing the motor task they imagined. However, providing users with worked examples has never been tried and might be worth exploring, by using a learning companion to provide those worked examples. In order to do so, users would have to make explicit the different strategies they used to control the BCI.
One way to do so could be by teaching the companion. This represents a challenge because of the variety of strategies users can use, which would then have to be analyzed, but also because the verbalization of motor-related strategies is subjective. Methods developed for clarifying interviews and user experience assessment could be adapted in order to clarify these verbalizations [82]. Such research could be linked to the semiotic training suggested for BCI, which consists in training participants to improve their capacity to associate their mental imagery strategies with their BCI performance [78]. The benefit of these methods is that they do not require a cognitive model of the task, though they could help determine learning paths and prove useful to develop such a cognitive model.
Indeed, in order to be able to provide more relevant cognitive feedback to BCI learners, we should first deepen our theoretical knowledge about MI-BCI skills and their underlying processes. Very little work has been performed by the community to model MI-BCI tasks and thus the skills to be acquired. Thus, the challenges to address (see Table 1) are the following:
(1) Define and implement a computational cognitive model of MI-BCI tasks [28];
(2) Based on this model, determine which skills should be acquired;
(3) Based on these skills, define relevant measures of performance;
(4) Based on these measures of performance, design cognitive feedback to help BCI learners achieve a high performance, i.e., to acquire the target skills.
4 DISCUSSION & CONCLUSION
In this article, we have shown that BCIs are promising interaction systems enabling users to interact using their brain activity only. However, they require users to train so that they can control them, and so far, this training has been suboptimal. Here, we hope we demonstrated how artificial learning companions could contribute to improving this training. In particular, we reviewed how such companions could be used to provide user-adapted and adaptive feedback at the social, emotional and cognitive levels. While there has been various research on the appearance of BCI feedback, there is almost no research on social, emotional and cognitive feedback for BCI. Learning companions could bridge that gap. Reviewing the learning companion literature, we suggested various ways to make that happen and the corresponding research challenges that will need to be solved. They are summarized in Table 1.
To conclude, the definition from [11] (i.e., "a learning companion is a computer-simulated character, which has human-like characteristics and plays a non-authoritative role in a social learning environment") is especially interesting because it involves an exchange of knowledge between the learner and the learning companion. This builds on the idea that, on the one hand, the BCI trainee could benefit from the social, emotional and cognitive feedback that the learning companion would provide, while, on the other hand, the model maintained by the learning companion could benefit from the learner's feedback to become better adapted.
Both the psychological profile and the cognitive states of the learner have an influence on the capacity to use a BCI and on the type of learning companion that can be the most effective. Therefore, creating models to understand 1) which states users go through while learning, 2) how the psychological characteristics and cognitive states of the user influence the learning, and finally 3) how to provide adapted feedback according to the previous points, represents a common goal for the BCI and learning companion fields, where both could benefit from each other.
Even though we focused on the improvements learning companions could bring to the feedback, the benefits are not limited to it. For example, they could also be used to assess or limit the potential experimenter bias, which occurs when experimenters' expectations or knowledge involuntarily influence their subjects [67]. Indeed, they could limit the need for an experimenter and make it easier to perform double-blind experiments, where neither the subjects nor the experimenters know to which experimental group the subjects belong.
Acknowledgements. This work was supported by the French National Research Agency (project REBEL, grant ANR-15-CE23-0013-01) and the European Research Council (project BrainConquest, grant ERC-2016-STG-714567).
REFERENCES
[1] V. Aleven, I. Roll, B. M. McLaren, and K. R. Koedinger. Automated, unobtrusive, action-by-action assessment of self-regulation during learning with an intelligent tutoring system. Educational Psychologist, 45(4):224–233, 2010.
[2] K. Ang and C. Guan. Brain-computer interface for neurorehabilitation of upper limb after stroke. Proceedings of the IEEE, 103(6):944–953, 2015.
[3] R. Azevedo and R. Bernard. A meta-analysis of the effects of feedback in computer-based instruction. J Educ Comp Res, 13(2):111–127, 1995.
[4] W. Balzer, M. Doherty, et al. Effects of cognitive feedback on performance. Psychological Bulletin, 106(3):410, 1989.
[5] R. Bangert-Drowns, C. Kulik, J. Kulik, and M. Morgan. The instructional effect of feedback in test-like events. Review of Educational Research, 61(2):213–238, 1991.
[6] R. Beale and C. Creed. Affective interaction: How emotional agents affect users. International Journal of Human-Computer Studies, 67(9):755–776, 2009.
[7] O. Bent, P. Dey, K. Weldemariam, and M. Mohania. Modeling user behavior data in systems of engagement. Futur Gen Comp Sys, 68:456–464, 2017.
[8] L. Bonnet, F. Lotte, and A. Lécuyer. Two brains, one game: design and evaluation of a multiuser BCI video game based on motor imagery. IEEE Transactions on Computational Intelligence and AI in Games, 5(2):185–198, 2013.
[9] W. Burleson and R. Picard. Gender-specific approaches to developing emotionally intelligent learning companions. IEEE Intelligent Systems, 22(4), 2007.
[10] R. Cabada, M. Estrada, C. Garcia, Y. Pérez, et al. Fermat: merging affective tutoring systems with learning social networks. In Proc ICALT, pages 337–339, 2012.
[11] C. Chou, T. Chan, and C. Lin. Redefining the learning companion: the past, present, and future of educational agents. Computers & Education, 40(3), 2003.
[12] N. Churamani, M. Kerzel, E. Strahl, P. Barros, and S. Wermter. Teaching emotion expressions to a human companion robot using deep neural architectures. In Proc IJCNN, pages 627–634, 2017.
[13] M. Clerc, L. Bougrain, and F. Lotte. Brain-Computer Interfaces 1: Foundations and Methods. ISTE-Wiley, 2016.
[14] M. Clerc, L. Bougrain, and F. Lotte. Brain-Computer Interfaces 2: Technology and Applications. ISTE-Wiley, 2016.
[15] B. Duffy. Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3):177–190, 2003.
[16] R. I. Dunbar and S. Shultz. Evolution in the social brain. Science, 317(5843), 2007.
[17] G. Edlinger and C. Guger. Social environments, mixed communication and goal-oriented control application using a brain-computer interface, volume 6766 LNCS of Lecture Notes in Computer Science. 2011.
[18] J. Frey, R. Gervais, S. Fleck, F. Lotte, and M. Hachet. Teegi: Tangible EEG interface. In Proc ACM UIST, pages 301–308, 2014.
[19] R. Goebel, B. Sorger, J. Kaiser, N. Birbaumer, and N. Weiskopf. BOLD brain pong: Self regulation of local brain activity during synchronously scanned, interacting subjects. In 34th Annual Meeting of the Society for Neuroscience, 2004.
[20] D. Goleman. Emotional Intelligence. New York: Brockman, Inc, 1995.
[21] G. Gordon, S. Spaulding, J. Westlund, J. Lee, L. Plummer, M. Martinez, M. Das, and C. Breazeal. Affective personalization of a social robot tutor for children's second language skills. In AAAI, pages 3951–3957, 2016.
[22] J. Hattie and H. Timperley. The power of feedback. Review of Educational Research, 77(1):81–112, 2007.
[23] E. Hornecker. The role of physicality in tangible and embodied interactions. Interactions, 18(2):19–23, 2011.
[24] A. Isen. An influence of positive affect on decision making in complex situations: Theoretical issues with practical implications. Journal of Consumer Psychology, 11(2):75–85, 2001.
[25] K. Izuma, D. Saito, and N. Sadato. Processing of social and monetary rewards in the human striatum. Neuron, 58(2):284–294, 2008.
[26] C. Jeunet, E. Jahanpour, and F. Lotte. Why standard brain-computer interface (BCI) training protocols should be changed: an experimental study. Journal of Neural Engineering, 13(3):036024, 2016.
[27] C. Jeunet, F. Lotte, and B. N'Kaoua. Human Learning for Brain-Computer Interfaces, pages 233–250. Wiley Online Library, 2016.
[28] C. Jeunet, B. N'Kaoua, and F. Lotte. Towards a cognitive model of MI-BCI user training. 2017.
[29] C. Jeunet, C. Vi, D. Spelmezan, B. N'Kaoua, F. Lotte, and S. Subramanian. Continuous tactile feedback for motor-imagery based brain-computer interaction in a multitasking context. In Human-Computer Interaction, pages 488–505, 2015.
[30] D. Johnson and R. Johnson. An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 2009.
[31] W. Johnson and P. Rizzo. Politeness in tutoring dialogs: "Run the factory, that's what I'd do". In Intelligent Tutoring Systems, pages 206–243. Springer, 2004.
[32] T. Kaufmann, J. Williamson, E. Hammer, R. Murray-Smith, and A. Kübler. Visually multimodal vs. classic unimodal feedback approach for SMR-BCIs: a comparison study. Int. J. Bioelectromagn., 13:80–81, 2011.
[33] J. Keller. An integrative theory of motivation, volition, and performance. Technology, Instruction, Cognition, and Learning, 6(2):79–104, 2008.
[34] Y. Kim. Pedagogical agents as learning companions: Building social relations with learners. In AIED, pages 362–369, 2005.
[35] S. Kleih, F. Nijboer, S. Halder, and A. Kübler. Motivation modulates the P300 amplitude during brain-computer interface use. Clinical Neurophysiology, 2010.
[36] K. Koedinger, V. Aleven, B. McLaren, and J. Sewall. Example-tracing tutors: A new paradigm for intelligent tutoring systems. Authoring Intelligent Tutoring Systems, pages 105–154, 2009.
[37] T. Kondo, M. Saeki, Y. Hayashi, K. Nakayashiki, and Y. Takata. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface. Human Movement Science, 43:239–249, 2015.
[38] S. Krach, F. Hegel, B. Wrede, G. Sagerer, F. Binkofski, and T. Kircher. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS One, 3(7):e2597, 2008.
[39] A. Kübler, N. Neumann, J. Kaiser, B. Kotchoubey, T. Hinterberger, and N. Birbaumer. Brain-computer communication: self-regulation of slow cortical potentials for verbal communication. Arch Phys Med Rehab, 82(11), 2001.
[40] A. Kübler, B. Kotchoubey, J. Kaiser, J. Wolpaw, and N. Birbaumer. Brain-computer communication: Unlocking the locked in. Psychological Bulletin, 127(3):358, 2001.
[41] A. Lécuyer, F. Lotte, R. Reilly, R. Leeb, M. Hirose, and M. Slater. Brain-computer interfaces, virtual reality, and videogames. Computer, 41(10), 2008.
[42] J. LeDoux. Emotion: Clues from the brain. Annu Rev Psychol, 46(1):209–235, 1995.
[43] R. Leeb, F. Lee, C. Keinrath, R. Scherer, H. Bischof, and G. Pfurtscheller. Brain-computer communication: motivation, aim, and impact of exploring a virtual apartment. IEEE Trans Neural Syst Rehabil Eng, 15(4):473–482, 2007.
[44] J. Lester, S. Converse, S. Kahler, S. Barlow, B. Stone, and R. Bhogal. The persona effect: affective impact of animated pedagogical agents. In Proc ACM CHI, 1997.
[45] F. Lotte. Towards Usable Electroencephalography-based Brain-Computer Interfaces. Habilitation thesis (HDR), Univ. Bordeaux, 2016.
[46] F. Lotte and C. Jeunet. Towards improved BCI based on human learning principles. In 3rd International Brain-Computer Interfaces Winter Conference, 2015.
[47] F. Lotte and C. Jeunet. Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics. In 7th International BCI Conference, 2017.
[48] F. Lotte, F. Larrue, and C. Mühl. Flaws in current human training protocols for spontaneous brain-computer interfaces: lessons learned from instructional design. Frontiers in Human Neuroscience, 7, 2013.
[49] K. Mathiak, E. Alawi, Y. Koush, M. Dyck, J. Cordes, T. Gaber, F. Zepf, N. Palomero-Gallagher, P. Sarkheil, S. Bergert, M. Zvyagintsev, and K. Mathiak. Social reward improves the voluntary control over localized brain activity in fMRI-based neurofeedback training. Frontiers in Behavioral Neuroscience, 9, 2015.
[50] J. Mattout. Brain-computer interfaces: A neuroscience paradigm of social interaction? A matter of perspective. Frontiers in Human Neuroscience, 6, 2012.
[51] S. McQuiggan, J. Robison, and J. Lester. Affective transitions in narrative-centered learning environments. Educational Technology & Society, 13(1):40–53, 2010.
[52] J. Mercier-Ganady, F. Lotte, E. Loup-Escande, M. Marchal, and A. Lécuyer. The Mind-Mirror: See your brain in action in your head using EEG and augmented reality. In Virtual Reality (VR), 2014 IEEE, pages 33–38. IEEE, 2014.
[53] M. Merrill. First principles of instruction: a synthesis. Trends and Issues in Instructional Design and Technology, 2:62–71, 2007.
[54] J. Millán, R. Rupp, G. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb, C. Neuper, K.-R. Müller, and D. Mattia. Combining brain-computer interfaces and assistive technologies: State-of-the-art and challenges. Frontiers in Neuroprosthetics, 2010.
[55] A. Mitrovic. Modeling domains and students with constraint-based modeling. Advances in Intelligent Tutoring Systems, pages 63–80, 2010.
[56] C. Mühl, B. Allison, A. Nijholt, and G. Chanel. A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges. Brain-Computer Interfaces, 1(2):66–84, 2014.
[57] S. Narciss and K. Huth. How to design informative tutoring feedback for multimedia learning. Instructional Design for Multimedia Learning, pages 181–195, 2004.
[58] C. Neuper and G. Pfurtscheller. Brain-Computer Interfaces, chapter Neurofeedback Training for BCI Control, pages 65–78. The Frontiers Collection, 2010.
[59] F. Nijboer, A. Furdea, I. Gunst, J. Mellinger, D. McFarland, N. Birbaumer, and A. Kübler. An auditory brain-computer interface (BCI). J Neur Meth, 2008.
[60] D. Norman. How might people interact with agents. Comm ACM, 37(7), 1994.
[61] M. Obbink, H. Gürkök, D. Plass-Oude Bos, G. Hakvoort, M. Poel, and A. Nijholt. Social interaction in a cooperative brain-computer interface game. LNICST, 2012.
[62] G. Pfurtscheller and C. Neuper. Motor imagery and direct brain-computer communication. Proceedings of the IEEE, 89(7):1123–1134, 2001.
[63] L. Pillette, C. Jeunet, B. Mansencal, R. N'Kambou, B. N'Kaoua, and F. Lotte. PEANUT: Personalised Emotional Agent for Neurotechnology User-Training. In 7th International BCI Conference, 2017.
[64] A. Ramos-Murguialday, M. Schürholz, V. Caggiano, M. Wildgruber, A. Caria, E. Hammer, S. Halder, and N. Birbaumer. Proprioceptive feedback and brain computer interface (BCI) based neuroprostheses. PLoS One, 7(10):e47048, 2012.
[65] M. Robinson and G. Clore. Belief and feeling: evidence for an accessibility model of emotional self-report. Psychological Bulletin, 128(6):934, 2002.
[66] R. Ron-Angevin and A. Díaz-Estrella. Brain-computer interface: Changes in performance using virtual reality techniques. Neurosci Lett, 449(2):123–127, 2009.
[67] R. Rosnow and R. Rosenthal. People Studying People: Artifacts and Ethics in Behavioral Research. WH Freeman, 1997.
[68] R. Ryan and E. Deci. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1):68, 2000.
[69] M. Saerbeck, T. Schut, C. Bartneck, and M. Janse. Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. In Proc CHI, pages 1613–1622, 2010.
[70] M. Schmitz. Tangible interaction with anthropomorphic smart objects in instrumented environments. 2010.
[71] J. Schumacher, C. Jeunet, and F. Lotte. Towards explanatory feedback for user training in brain-computer interfaces. In Proc IEEE SMC, pages 3169–3174, 2015.
[72] P. Sepulveda, R. Sitaram, M. Rana, C. Montalba, C. Tejos, and S. Ruiz. How feedback, motor imagery, and reward influence brain self-regulation using real-time fMRI. Human Brain Mapping, 37(9):3153–3171, 2016.
[73] C. Sexton. The overlooked potential for social factors to improve effectiveness of brain-computer interfaces. Frontiers in Systems Neuroscience, 9:1–5, 2015.
[74] V. Shute. Focus on formative feedback. Rev Educ Res, 78:153–189, 2008.
[75] R. Sitaram, T. Ros, L. Stoeckel, S. Haller, F. Scharnowski, J. Lewis-Peacock, N. Weiskopf, M. Blefari, M. Rana, E. Oblak, et al. Closed-loop brain training: the science of neurofeedback. Nature Reviews Neuroscience, 2016.
[76] T. Sollfrank, A. Ramsay, S. Perdikis, J. Williamson, R. Murray-Smith, R. Leeb, J. Millán, and A. Kübler. The effect of multimodal and enriched feedback on SMR-BCI performance. Clinical Neurophysiology, 127(1):490–498, 2016.
[77] V. Terzis, C. Moridis, and A. Economides. The effect of emotional feedback on behavioral intention to use computer based assessment. Computers & Education, 59(2):710–721, 2012.
[78] M. Timofeeva. Semiotic training for brain-computer interfaces. In Proc FedCSIS, pages 921–925, 2016.
[79] J. van Erp, F. Lotte, and M. Tangermann. Brain-computer interfaces: Beyond medical applications. IEEE Computer, 45(4):26–34, 2012.
[80] T. Van Gog and N. Rummel. Example-based learning: Integrating cognitive and social-cognitive research perspectives. Educ Psychol Rev, 22(2):155–174, 2010.
[81] S. Williams. Teachers' written comments and students' responses: A socially constructed interaction. 1997.
[82] C. Wilson. Interview Techniques for UX Practitioners: A User-Centered Design Method. Newnes, 2013.
[83] M. Witte, S. Kober, M. Ninaus, C. Neuper, and G. Wood. Control beliefs can predict the ability to up-regulate sensorimotor rhythm during neurofeedback training. Frontiers in Human Neuroscience, 7, 2013.
[84] O. Ybarra, E. Burnstein, P. Winkielman, M. Keller, M. Manis, E. Chan, and J. Rodriguez. Mental exercising through simple socializing: Social interaction promotes general cognitive functioning. Personality and Social Psychology Bulletin, 34(2):248–259, 2008.
[85] T. Zander and S. Jatzev. Detecting affective covert user states with passive brain-computer interfaces. In Proc ACII, pages 1–9, 2009.
[86] C. Zich, S. Debener, M. De Vos, S. Frerichs, S. Maurer, and C. Kranczioch. Lateralization patterns of covert but not overt movements change with age: An EEG neurofeedback study. Neuroimage, 116:80–91, 2015.
Article
Despite an increasing focus on the neural basis of human decision making in neuroscience, relatively little attention has been paid to decision making in social settings. Moreover, although human social decision making has been explored in a social psychology context, few neural explanations for the observed findings have been considered. To bridge this gap and improve models of human social decision making, we investigated whether acquiring a good reputation, which is an important incentive in human social behaviors, activates the same reward circuitry as monetary rewards. In total, 19 subjects participated in functional magnetic resonance imaging (fMRI) experiments involving monetary and social rewards. The acquisition of one's good reputation robustly activated reward-related brain areas, notably the striatum, and these overlapped with the areas activated by monetary rewards. Our findings support the idea of a "common neural currency" for rewards and represent an important first step toward a neural explanation for complex human social behaviors.
Chapter
This chapter gives an idea of the current state of research of Brain-Computer Interfaces (BCI) learning protocols. The BCI community now recognizes that in order to achieve an improvement in performance, the user must be included in the loop, and so learning protocols must be improved accordingly. It have also shown that by building on theories in disciplines such as the psychology of learning, it is possible to suggest new, promising approaches for improving user performance. The chapter focuses on protocols developed for teaching users how to use BCIs based on mental imagery (MI), also known as spontaneous BCIs. One protocol was suggested by researchers in Graz based on techniques of machine learning, and the other was suggested by the researchers at the Wadsworth center based on an operant conditioning approach. Finally, the chapter presents possible avenues for improving learning protocols, in particular based on an “anthropocentric” perspective.
Article
The proliferation of mobile devices has changed the way digital information is consumed and its efficacy measured. These personal devices know a lot about user behavior from embedded sensors along with monitoring the daily activities users perform through various applications on these devices. This data can be used to get a deep understanding of the context of the users and provide personalized services to them. However, there are lot of challenges in capturing, modeling, storing, and processing such data from these systems of engagement, both in terms of achieving the right balance of redundancy in the captured and stored data, along with ensuring the usefulness of the data for analysis. There are additional challenges in balancing how much of the captured data should be processed through client or server applications. In this article we present the modeling of user behavior in the context of personalized education which has generated a lot of recent interest. More specifically, we present an architecture and the issues of modeling student behavior data, captured from different activities the student performs during the process of learning. The user behavior data is modeled and sent to the cloud-enabled backend where detailed analytics are performed to understand different aspects of a student, such as engagement, difficulties, preferences etc. and to also analyze the quality of the data.