Towards a Brain-Robot Interface for children
Gloria Beraldo1, Stefano Tortora1 and Emanuele Menegatti1
Abstract— Brain-Computer Interface (BCI) systems have been widely studied and explored with adults, demonstrating the possibility of achieving augmentative communication and control directly from the user's brain. Nevertheless, the study and exploitation of BCIs in children remain limited. In this paper we propose and present, for the first time, a Brain-Robot Interface enabling children to mentally drive a robot. To this end, we exploit the combination of a P300-based Brain-Computer Interface and a shared-autonomy approach to achieve reliable and safe robot navigation. We tested our system in a pilot study involving five children. Our preliminary results highlight the advantages of using an accumulation framework, thanks to which the children's performance reached 81.67% ± 12.7% accuracy on average. During the experiments, the shared-autonomy approach relied on low-level intelligent control on board the robot to avoid obstacles, enabling effective navigation even with a small number of commands.
I. INTRODUCTION
Brain-Computer Interfaces (BCIs) are a well-known technology able to detect and translate the electrical signals produced by brain activity into outputs communicating the user's intent, without the participation of peripheral nerves and muscles [1]. For people with neurodegenerative diseases, these interfaces can provide an alternative form of communication, implementing direct mind-control of external devices. In this perspective, thanks to the knowledge about brain function acquired over the last 15 years and to the progress in robotics, people have been able to control different devices such as new-generation neuroprostheses, wheelchairs, telepresence robots and robotic arms [2], [3], [4].
Although BCI systems have been widely investigated and explored over the years in different contexts, most studies have focused on adult subjects. To the best of our knowledge, the exploitation of BCI in children remains strongly limited. Surely the lack of procedures, guidelines and recommendations according to the evidence-based medicine paradigm plays a significant role. However, as mentioned in [5], the incidence rates of severe neurological disorders in children should not be overlooked. Their consequences can be varied and severe depending on lesion location and size, its cause and the age of the young patient: motor disorders, seizures, cognitive and neuropsychological disturbances [6]. Nevertheless, effective medical treatments and rehabilitation approaches can significantly influence the therapy outcome, and in children the prognosis can be even more promising than in adults thanks to their brain plasticity [7]. In this context, the exploitation of advanced techniques like BCI might be a future prospect for neurological rehabilitation and physiotherapy for children with severe neurological deficits.

1 Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova, Padua, Italy. {beraldog, tortora, emg}@dei.unipd.it

Fig. 1. A child mentally navigating a service robot via the proposed Brain-Robot Interface.
In this paper we propose a preliminary system to navigate a robot through a Brain-Robot Interface. To the best of our knowledge, this is the first system that enables children to mentally control a service robot in an unstructured environment (see Fig. 1). To this aim, we adapted our previous system, based on a semi-autonomous navigation algorithm, which was already tested with adults [8]. In that work, the robot takes high-level commands from the user as input and autonomously deals with low-level problems, such as obstacle avoidance and the search for the best trajectory to reach the destination. However, in comparison to our previous work, in order to facilitate the control of the robot for children, we simplified the system both in the BCI protocol and in the feedback given to the user. In detail, we used an intuitive BCI based on visual event-related potentials (P300) [9] and modified the graphical interface to give richer feedback to the children. All these aspects are particularly important when the users are children [10] and motivated our design choices. This work aims at developing a preliminary Brain-Robot Interface enabling children to mentally drive robots. We believe that this kind of system can also be useful to expand the knowledge about BCI in children. Indeed, the presented work is part of a long-term project with the purpose of validating the proposed Human-Robot Interface in a paediatric context with the collaboration of neurophysiologists, psychologists and children experts.
This paper is organized as follows. Section II describes the BCI system, the robot and our semi-autonomous navigation algorithm. Section III presents the experimental design. In Section IV we show and evaluate our preliminary results. Finally, in Section V we discuss the proposed system and present our future directions.

Fig. 2. A) Topographic distribution of P300 potentials in a sample of adult and child subjects. B) Schematic representation of the visual graphical interface used to stimulate the subjects. Top row: the protocol used during training. The user is instructed by a symbolic cue, appearing in the centre of the screen, about the target image on which he/she has to focus his/her attention. Then, all the images flash block-randomized for several blocks. Finally, a visual feedback based on the BCI prediction is provided to the user, indicating the end of the trial: a green box (if the classified image is equal to the target) or a red box (otherwise) is added around the classified image. During the BCI training the robot is not used, therefore no feedback is provided by it. Bottom row: the protocol during robot navigation through the BCI is the same as during training, except that there is no cue, because the user autonomously decides on which image to focus his/her attention. When the BCI system produces a new outcome, the classified image is highlighted with a green box and the corresponding command is sent to the ROS infrastructure. While the robot is performing the current behaviour, the user is not stimulated until an image representing the robot's face appears in the centre of the graphical interface. This is an additional feedback from the ROS system, indicating the execution of the command by the robot, and a signal recalling the user's attention to send the next command.
II. METHODS
A. Brain-Computer Interface system
From the point of view of the BCI system, we chose to apply a visual P300 BCI [9]. Indeed, as previous studies with adults showed [11], [12], the P300 signal is easy to recognize in almost every person. For reference, Fig. 2A shows the P300 waves of the child and adult subjects involved in this study.

Moreover, the P300 approach is more intuitive than other BCI paradigms (e.g., those based on Sensorimotor Rhythms (SMR)). Therefore, the task required of the subject is very simple for the operator to explain and, especially, for children to understand. Finally, extensive training is not necessary, reducing the risk of losing the children's attention and participation.

In the following parts, we briefly describe the different components of the BCI system used in this study.
Paradigm
The user can drive the robot by concentrating on the
graphical interface depicted in the right box of Fig. 2B.
In order to elicit the P300 visual event-related potential,
four coloured arrows are flashed in a random sequence
in the four boxes on the top, bottom, left and right
positions, corresponding respectively to the following
four commands for the robot: FORWARD, STOP/GO
BACK, TURN LEFT, TURN RIGHT. The flashing
arrows appear in a random sequence with permutation
without repetition; the sequence is called a block. We
set the flash period (i.e. the interval in which the arrow
is turned on) to 0.15 s and the inter-flash period (the
amount of time between two consecutive flashes) to
0.55 s, resulting in an inter-stimulus interval (ISI) of
0.7 s. Thus, each block lasts for 2.8 s. The sequence of
blocks is grouped in a trial. At the end of each trial,
the BCI classifier, based on the P300 signals of the user,
predicts the command selected by the user and shows it
on the screen (the green box in Fig. 2B). The command
is sent to the robot for the execution. After the execution
a new trial starts. At the beginning of a trial a small icon
with the face of the robot is shown for 1 s to be sure the
children do not miss the first flashes of the first block.
Several trials are grouped in a run.
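As a concrete illustration of the timing above, the following Python sketch (ours, not the original implementation) generates the block-randomized flash schedule: each block is a permutation of the four arrows without repetition, and consecutive flashes are separated by the 0.7 s inter-stimulus interval.

```python
import random

# Assumed parameters, taken from the text: 4 commands, 0.15 s flash,
# 0.55 s inter-flash period, i.e. a 0.7 s inter-stimulus interval (ISI).
COMMANDS = ["FORWARD", "STOP/GO BACK", "TURN LEFT", "TURN RIGHT"]
FLASH_S, INTER_FLASH_S = 0.15, 0.55
ISI_S = FLASH_S + INTER_FLASH_S          # 0.7 s per stimulus

def make_trial(n_blocks: int):
    """Return the flash schedule of one trial as (onset time, command) pairs.

    Each block is a permutation of the four arrows without repetition,
    so every command flashes exactly once per block.
    """
    schedule, t = [], 0.0
    for _ in range(n_blocks):
        for cmd in random.sample(COMMANDS, k=len(COMMANDS)):  # one block
            schedule.append((t, cmd))
            t += ISI_S
    return schedule

trial = make_trial(n_blocks=5)           # e.g. Nb = 5, as used for children
print(f"trial duration: {trial[-1][0] + ISI_S:.1f} s")  # 5 blocks x 2.8 s = 14.0 s
```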
EEG Acquisition and Preprocessing
EEG data were recorded using a portable g.tec system (g.tec medical engineering, Austria) acquiring 16 channels. Electrodes were placed over the frontal and parieto-occipital areas (Fz, FC3, FC4, C3, Cz, C4, CP3, CP4, P7, P3, Pz, P4, P8, PO3, PO4, Oz) according to the international 10-20 system layout (see Fig. 2A). Samples were recorded at a 512 Hz sampling rate. The signal inside the 0.45 s time-window epoch after every stimulus was acquired, and a 4th-order Butterworth digital band-pass filter in the range 1-24 Hz was applied to remove baseline drift and high-frequency noise. Then, a common average reference (CAR) filter was applied and the signal was decimated by a factor of 8. To reduce the effect of eye-blink, eye-movement and muscle artefacts, the signal from each channel was winsorized, by computing the 10th and 90th percentiles of the signal amplitude and replacing every sample outside this range with the value of the 10th or 90th percentile, respectively. Finally, we applied a z-score normalization to account for trial-to-trial and day-to-day variability.
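A minimal sketch of this preprocessing chain in Python/SciPy may help fix the ideas. It is our reconstruction, not the authors' code: zero-phase filtering is assumed for simplicity, since the exact filter realization of the original system is not specified.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS = 512          # sampling rate (Hz)
DECIM = 8         # decimation factor: 512 Hz -> 64 Hz

def preprocess_epoch(epoch: np.ndarray) -> np.ndarray:
    """Preprocess one (n_samples, n_channels) post-stimulus epoch.

    Pipeline described in the text: 4th-order Butterworth band-pass
    (1-24 Hz), common average reference, decimation by 8, winsorization
    at the 10th/90th percentiles, z-score normalization.
    """
    # 1) 4th-order Butterworth band-pass, 1-24 Hz
    b, a = butter(4, [1.0, 24.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, epoch, axis=0)       # zero-phase (an assumption)

    # 2) Common average reference: subtract the mean across channels
    x = x - x.mean(axis=1, keepdims=True)

    # 3) Decimate by 8
    x = decimate(x, DECIM, axis=0, zero_phase=True)

    # 4) Winsorize each channel at its 10th and 90th percentiles
    lo, hi = np.percentile(x, [10, 90], axis=0)
    x = np.clip(x, lo, hi)

    # 5) Z-score normalization per channel
    return (x - x.mean(axis=0)) / x.std(axis=0)
```

With a 0.45 s window at 512 Hz (about 230 samples), decimation by 8 leaves 29 samples per channel, which matches the 464-dimensional feature vector (29 × 16) used below.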
Feature extraction and Classification
The resulting samples for each channel were concatenated, creating a feature vector for each trial (the length of the feature vector was 64 Hz × 0.45 s ≈ 29 samples per channel × 16 channels = 464). Offline evaluation of the system was performed for each subject by means of a 4-fold cross-validation over the training dataset. Once this set of features x̂ was extracted, the next step was the classification phase. In this system, we applied Bayesian Linear Discriminant Analysis (BLDA), which has been extensively studied for P300 classification problems [13], [14], [15]. Briefly, BLDA can be seen as a regularized version of Linear Discriminant Analysis (LDA), in which the weight vector w, such that the discriminant function is equal to t(x) = w^T x, is assumed to be a latent variable and estimated using Bayesian inference [13]:

$$P(w \mid \beta, \alpha, D) = \frac{P(D \mid \beta, w)\, P(w \mid \alpha)}{\int P(D \mid \beta, w)\, P(w \mid \alpha)\, dw} \qquad (1)$$

where D is the training dataset, and P(D|β, w) and P(w|α) are the likelihood function and the prior of the weights, both assumed to be Gaussian. The likelihood function of the weights is estimated from the training dataset D, while the prior distribution is set equal to an isotropic multivariate Gaussian with diagonal covariance determined by α. Finally, β and α are hyperparameters of the classifier and can be iteratively estimated from the training dataset by maximizing the likelihood. A detailed description and implementation of the algorithm can be found in [13], [16], [17].
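To make the classifier concrete, the following is a simplified sketch of the BLDA posterior-mean computation (our illustration: hyperparameters are fixed rather than iteratively re-estimated by evidence maximization as in [13], and regression targets of ±1 for target/non-target epochs are assumed).

```python
import numpy as np

def blda_fit(X: np.ndarray, y: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Posterior mean of the BLDA weight vector (simplified).

    X: (n_epochs, n_features) feature matrix; y: regression targets
    (+1 target, -1 non-target). With a Gaussian likelihood of precision
    beta and an isotropic Gaussian prior of precision alpha, the
    posterior over w is Gaussian with mean
        m = beta * (beta * X^T X + alpha * I)^(-1) X^T y.
    In the full algorithm [13], alpha and beta are iteratively
    re-estimated from the training data; here they are kept fixed.
    """
    A = beta * X.T @ X + alpha * np.eye(X.shape[1])
    return beta * np.linalg.solve(A, X.T @ y)

def blda_score(w: np.ndarray, x: np.ndarray) -> float:
    """Discriminant value t(x) = w^T x for a single feature vector."""
    return float(w @ x)
```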
At each trial, when a new feature vector x̂ is received, the output of the classification is given by w^T x̂, with w equal to the mean of P(w|β, α, D), representing the raw posterior probability of belonging to the target class. However, to accumulate evidence of the user's intention, the raw probabilities were integrated linearly over Nb blocks of stimuli. More precisely, the classifier provides four posterior probabilities (one for each class), computed by summing over blocks for each image. The class C maximizing that sum is selected and converted into the corresponding command to send to the robot:

$$C = \underset{c \in \{1,2,3,4\}}{\arg\max} \; \sum_{b=1}^{N_b} w^T \hat{x}_{b,c} \qquad (2)$$

The user is informed about the output of the classification through a visual feedback.
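In code, the accumulation step of Eq. (2) reduces to a sum of the per-block scores followed by an argmax over the four classes; a sketch (ours, reusing the hypothetical blda_score above) is:

```python
import numpy as np

COMMANDS = ["FORWARD", "STOP/GO BACK", "TURN LEFT", "TURN RIGHT"]

def select_command(scores: np.ndarray) -> str:
    """Apply Eq. (2): scores is an (Nb, 4) array where scores[b, c] is
    the classifier output w^T x_{b,c} for image c in block b. The command
    with maximal accumulated evidence is returned.
    """
    accumulated = scores.sum(axis=0)        # sum over the Nb blocks
    return COMMANDS[int(np.argmax(accumulated))]

# scores[b, c] = blda_score(w, x[b, c]) collected during one trial
```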
Feedback
The system provides the user with two different types of feedback (see Fig. 2B). One is designed to inform the user about the input detected by the BCI (a box around the classified image). The other aims to notify the user about the execution of the received command on the robot side. It consists of a small image representing the robot's face that appears in the centre of the graphical interface when the robot is about to finish the execution of the current command and start the next planning. The aim is to draw the user's attention back to the graphical interface to send the next command. Indeed, in the case of an exogenous BCI, it is crucial that the user stays attentive to the graphical interface, since it is also the medium through which he/she is stimulated.
B. Robot
In this work, we exploited the Pepper robot1, designed by Aldebaran Robotics and released in 2015 by SoftBank (see Fig. 2B). It is a human-shaped robot created to be a day-to-day companion; thus, it is optimized for human interaction and able to engage with people. The robot is 121 cm tall and features 20 DOFs for natural and expressive movements. It has a 1.9 GHz quad-core Atom processor and 4 GB of RAM. For navigation purposes, it is equipped with an omnidirectional base (0.480 × 0.425 m), two sonars, three bumpers, three laser sensors, an inertial unit and actuators. It also provides 2D and 3D cameras, touch sensors, LEDs and microphones for multimodal interaction.
III. EXPERIMENTAL DESIGN
The experiments were performed on two separate days (two sessions) per subject, in order to reduce the workload required of the children. The first day was dedicated to the explanation of the protocol and to the BCI training, while the second one to the control of the robot. In detail, the training consisted of three runs of 8 trials each (2 per target image). The number of blocks Nb per trial during the training was chosen randomly between 7 and 9, in order to avoid habituation and expectation in the user. The subject was instructed by a symbolic cue, appearing in the centre of the screen, about the target image on which he/she had to focus his/her attention (see Fig. 2B). In addition, he/she was asked not to move, speak or blink during each run. The duration of each run was 3 minutes on average. After each run we included breaks and talks with the user.

We decided to calibrate the system after the first run, in order to engage the child more by providing a visual feedback based on the outcome of the BCI starting from the second run. Precisely, a green (if the classified image is equal to the target) or red (otherwise) box was added around the classified image in Fig. 2B. Then we retrained the classifier after each run using all the available data.

1 https://www.softbankrobotics.com/emea/en/pepper

Fig. 3. The experimental environment: the user sat at position S and the robot was positioned at R at the beginning. The user was asked to mentally drive the robot from R to the three target locations T1, T2, T3 consecutively.
On the second day, the user was required to perform only one new training run before driving the robot. We updated the classifier using the last three runs (2 acquired on the first day, 1 on the second day) and we manually chose the Nb to be used during the navigation phase, according to the presence of a plateau in the average accuracy output by the cross-validation.
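Although in our study this choice was made manually by the operator, an automatic analogue of the plateau criterion might look like the following sketch (ours; the threshold and the example curve are illustrative, not measured values).

```python
import numpy as np

def suggest_nb(acc_by_nb: np.ndarray, gain_thresh: float = 0.02) -> int:
    """Suggest the smallest Nb after which the cross-validation accuracy
    plateaus, i.e. the marginal gain of one more block drops below
    gain_thresh. acc_by_nb[k] is the accuracy when accumulating k+1 blocks.
    """
    for k, gain in enumerate(np.diff(acc_by_nb)):
        if gain < gain_thresh:
            return k + 1                   # Nb is 1-indexed
    return len(acc_by_nb)                  # no plateau: use all blocks

# Hypothetical child-like curve: returns 5, consistent with the Nb
# typically chosen for children in this study.
print(suggest_nb(np.array([0.58, 0.68, 0.74, 0.79, 0.82, 0.83, 0.84])))
```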
During the robot's navigation, the subject sat at position S in front of a laptop, 1 m from its 15.6" display. At the beginning, the robot was positioned at R and was only partially visible to the user. The user was asked to move the robot from R, going consecutively through the three target positions T1, T2, T3, by sending mental commands via the BCI (see Fig. 3). The user was kept aware of the robot's movements by watching its position on a map of the environment, the same map provided to the robot for localization and navigation purposes.
A. Subjects
Five healthy children (age 10.4 ± 2.19 years, 1 female) without any previous experience with BCI took part in the study; their parents signed the consent form. In addition, we also considered 3 healthy adult subjects (age 26.33 ± 1.53 years, 1 female), with whom we evaluated the feasibility of the protocol before testing it with children. They had not tried a P300 BCI system before. The project was approved by the Ethics Committee for Clinical Trials of the Azienda Ospedaliera of Padua. All the experiments were conducted in accordance with the ethical guidelines of the 1975 Declaration of Helsinki.
B. Semi-autonomous navigation system
The core functionality underlying the navigation system is to provide intelligent motion control of the robot according to the commands sent by the user through the P300 BCI. The motivation is to reduce as much as possible undesirable behaviours of the robot that could tire and annoy the child. In this regard, we extended our previous semi-autonomous navigation system, based on shared control for BCIs and exploiting the Robot Operating System (ROS) [8]. According to our algorithm, the user drives the robot by sending commands corresponding to changes of its direction while, at the same time, the robot performs obstacle detection and avoidance, computing the best trajectory to reach the goal. More precisely, the robot performs a default behaviour that makes it move forward, avoiding obstacles when necessary. Whenever the BCI system has a new output available, it is converted into the corresponding command. Furthermore, to make the robot move smoothly, it receives a navigation subgoal that is continuously updated based on the actions the user wants the robot to perform. Moreover, to achieve stronger and more reliable navigation, the robot uses a priori knowledge as well as a dynamic perception of the environment. Thus, the robot receives at the beginning two static global maps of the environment, thanks to which it localizes itself and is aware of the fixed obstacles. In addition, it estimates its pose in the environment by fusing odometry with the output of a localization module. Please refer to [8] for further details.
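As a rough illustration of how a decoded command can be turned into an updated navigation subgoal in ROS, consider the following sketch (ours, not the authors' implementation; the topic name, the 1 m subgoal distance and the yaw-only geometry are assumptions).

```python
import math
import rospy
from geometry_msgs.msg import PoseStamped

# Relative heading offset associated with each user command
TURN = {"GO": 0.0, "TURN LEFT": math.pi / 2,
        "TURN RIGHT": -math.pi / 2, "GO BACK": math.pi}

def make_subgoal(x: float, y: float, yaw: float, action: str,
                 dist: float = 1.0) -> PoseStamped:
    """Place a subgoal `dist` metres from the robot pose (x, y, yaw)
    in the direction requested by the user; the on-board planner then
    handles obstacle avoidance on the way to it."""
    heading = yaw + TURN[action]
    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x = x + dist * math.cos(heading)
    goal.pose.position.y = y + dist * math.sin(heading)
    goal.pose.orientation.z = math.sin(heading / 2.0)  # yaw-only quaternion
    goal.pose.orientation.w = math.cos(heading / 2.0)
    return goal

rospy.init_node("bri_subgoal_updater")
goal_pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
# On each new BCI output: goal_pub.publish(make_subgoal(x, y, yaw, action))
```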
In the new protocol, the user can deliver four steering commands — GO, STOP/GO BACK, TURN LEFT, TURN RIGHT — to drive the robot. The commands TURN LEFT (left image in the P300 interface) and TURN RIGHT (right image in the P300 interface) make the robot turn in the corresponding direction. When the user focuses his/her attention on the top or bottom image of the P300 interface, the choice is mapped to different actions according to the previous behaviour performed by the robot and its current status. In detail, when the bottom image is chosen, it is converted into the STOP/GO BACK command: if the robot is moving (speed different from 0), it stops the robot (STOP); otherwise, if the robot is idle, it activates the GO BACK action, which makes the robot move backwards. This operation represents a kind of alternative RECOVERY BEHAVIOUR that can be chosen voluntarily by the user, for example to correct commands misclassified by the BCI system or to unblock the robot when it is not able to reach the current target subgoal. Conversely, the selection of the image at the top — the GO command — corresponds to the default behaviour of the robot, making it go straight. This can be used to reactivate the default behaviour after the selection of the STOP/GO BACK command or, if the robot is already moving, to keep it going.
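This context-dependent mapping can be summarized in a few lines; the sketch below is ours, with illustrative class names and a simple speed test standing in for the robot's actual status check.

```python
from enum import Enum, auto

class Action(Enum):
    GO = auto()          # resume/keep the default forward behaviour
    STOP = auto()
    GO_BACK = auto()     # recovery behaviour
    TURN_LEFT = auto()
    TURN_RIGHT = auto()

def map_bci_command(bci_class: str, robot_speed: float) -> Action:
    """Map a decoded P300 class to a robot action. The bottom image is
    context-dependent: STOP while the robot is moving, GO BACK while idle.
    """
    if bci_class == "LEFT":
        return Action.TURN_LEFT
    if bci_class == "RIGHT":
        return Action.TURN_RIGHT
    if bci_class == "TOP":
        return Action.GO
    if bci_class == "BOTTOM":
        return Action.STOP if abs(robot_speed) > 0.0 else Action.GO_BACK
    raise ValueError(f"unknown BCI class: {bci_class}")
```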
IV. RESULTS
A. Training phase
In this section we present the results of the classification based on the training dataset with child and adult subjects. In Figs. 4 and 5 we show, for each subject, the average cross-validation accuracy as a function of the number of accumulated blocks. The same curves are used by the operator to understand the performance of the system and to manually select the number of blocks Nb to be used during the navigation phase, according to the presence of a plateau in the performance. In the case of the adults, the average accuracy across subjects reached 93.06% ± 12.03% after accumulating over 7 blocks, starting from 76.39% ± 14.63%. The improvement was even more substantial with the children, whose performance increased from 58.33% ± 17.43% to 85% ± 15.86%. With regard to the selection of the number of blocks Nb during navigation, we considered 3 (average accuracy equal to 94.44%) a good choice for adults, while for children we generally selected 5 blocks (average accuracy equal to 81.67%), taking into account both the performance and the time required to deliver commands (which increases with Nb). However, as shown in Tables I and II, Nb was adjusted by the operator according to the status and the level of attention of each user. In Tables I and II we present both the performance of our proposed framework based on evidence accumulation and the corresponding performance that would have been achieved by classifying after each flash (single-epoch classification), with respect to the corresponding chance levels [18].

Fig. 4. Classification accuracy obtained by integrating over the blocks of stimuli for each adult subject. The dashed curve represents the average accuracy across adults.

Fig. 5. Classification accuracy obtained by integrating over the blocks of stimuli for each child subject. The dashed curve represents the average accuracy across children.
TABLE I
TRAINING PERFORMANCE WITH ADULT SUBJECTS

           |    Evidence accumulation           |          Single epoch
Subjects   | Nb chosen  Accuracy  Chance level  | Accuracy  Recall  Specificity  Chance level
S1         | 2          1.00      0.4           | 0.75      0.93    0.69         0.6
S2         | 3          0.96      0.4           | 0.74      0.86    0.69         0.6
S3         | 3          0.88      0.4           | 0.66      0.84    0.60         0.6

TABLE II
TRAINING PERFORMANCE WITH CHILD SUBJECTS

           |    Evidence accumulation           |          Single epoch
Subjects   | Nb chosen  Accuracy  Chance level  | Accuracy  Recall  Specificity  Chance level
S4         | 5          0.67      0.4           | 0.56      0.78    0.48         0.6
S5         | 5          0.71      0.4           | 0.60      0.73    0.56         0.6
S6         | 5          0.83      0.4           | 0.63      0.82    0.57         0.6
S7         | 5          0.92      0.4           | 0.64      0.88    0.56         0.6
S8         | 6          1.00      0.4           | 0.65      0.85    0.60         0.6
In particular, as expected, the accuracy is higher when evidence accumulation is applied: it rose by about 23 percentage points in both adults and children.
Moreover, to better describe the performance of the classifier, we also considered the recall and specificity values in the case of single-epoch classification, indicating respectively the probability of detecting the desired selection and the probability of correctly rejecting the wrong choices. Overall, in both adult and child subjects, the recall presented high values (87.67% ± 4.72% in adults and 81.12% ± 5.89% in children), while the specificity appeared quite low (66.00% ± 5.20% in adults and 55.4% ± 4.44% in children).
B. BCI driven robot navigation
In this section we present the results of the final experiment, in which children and adults were asked to mentally drive the robot from R through the three target positions T1, T2, T3 consecutively (see Fig. 3). An illustrative video is available at https://youtu.be/7GJE0aDmkxA.

Among the child subjects, three (S4, S6, S8) took part in the final part of the experiment, making the robot navigate in the environment through the BCI. Unfortunately, the other two subjects (S5, S7) performed only the training phase, because they did not show up on the second day to finish the protocol. In addition, S6 and S8 tried to move the robot through the three target positions T1, T2, T3 three times each, while S4 performed only one attempt, because he was demotivated by using the BCI and too attracted by the robot. Regarding the control of the robot, we considered the navigation accuracy, computed as the number of times each target position was reached over the total number of attempts across the subjects (see Fig. 6): driven by the children, the robot arrived at T1 in 100% of the attempts, at T2 in 71.43% and at T3 in 28.57%. In addition, regarding the incorporation of the shared control, we also analyzed the number of BCI commands delivered by the subjects and the time necessary to reach the three targets (Fig. 6). On average, the children delivered 3.00 ± 1.15 commands to reach T1, 3.80 ± 4.66 for T2 and 2.00 ± 1.15 for T3. In terms of time, on average 68.57 ± 27.17 s were needed to make the robot arrive at T1, 84.00 ± 82.96 s at T2 and 62.5 ± 27.17 s at T3. The high standard deviation related to target T2 was due to an attempt in which many wrong BCI commands were sent by S6.

Fig. 6. Navigation accuracy, average and standard deviation of the time spent and of the number of commands sent across the child subjects to reach each of the target positions T1, T2, T3.
One adult (S1) also controlled the robot through the BCI along the three target positions and repeated the task three times, sending on average 7.00 ± 6.08 commands to reach T1, 2.00 ± 0 for T2 and 2.66 ± 0.57 for T3. However, in a P300 BCI the number of commands sent depends strongly on the number of blocks used (Nb): 2 for the adult and 5-6 for the children. As regards the time required to navigate the robot, in the case of the adult S1, on average 76.00 ± 64.11 s were necessary to make the robot arrive at T1, 23.0 ± 1 s at T2 and 28.66 ± 9.29 s at T3.
V. DISCUSSION
The main purpose of this paper was to present, for the first time, a Brain-Robot Interface enabling children to mentally drive robots. In this regard, the combination of an intuitive BCI paradigm and a shared-autonomy approach allowed even children to control a robot via BCI alone. Indeed, although in the current literature endogenous systems such as those based on Sensorimotor Rhythms (SMR) have been mainly exploited to successfully control mobile devices [4], [19], [20], in this work we used an easier BCI paradigm based on visual event-related potentials (P300). On the one hand, endogenous BCIs are surely more appealing, because the user decides when to start the mental task independently of any external stimulus; on the other hand, the main advantage of exogenous BCIs such as P300 is that they are based on a very simple task, suitable also for children, and require limited training. Nevertheless, in our Brain-Robot Interface, the slowness deriving from the use of an exogenous BCI is compensated on the robotic side by the low-level intelligence on board the robot. Our system demonstrated that the robot can avoid obstacles and determine the best trajectory to follow even when the user cannot deliver new commands.
Since the exploitation of BCI in children is still in its infancy, comparing our results with the different BCI systems tested on adults could be meaningless. However, it is worth highlighting that the results we achieved with adults are in line with those presented in [13] in terms of classification accuracy. Both works report the same trend of increasing accuracy when accumulating over a larger number of blocks. Our preliminary results suggest that, on average, 3 blocks are a good choice for adults and 5 for children.
This aspect becomes fundamental when this kind of BCI is used to control an external device such as a robot. Indeed, by increasing the number of blocks, the system can better detect the intention of the user, reducing the number of misclassified commands; but the time required before sending a command increases, with the risk that the user gets bored and becomes unable to drive the robot. In this regard, we also evaluated the possibility of designing a Brain-Robot Interface based on a single-epoch framework, in which a new output of the classifier is available after each flash and therefore a new command is sent to the robot [21]. Our preliminary results showed that, with single-epoch classification, the system seems less robust in rejecting responses to undesired flashes (low specificity), increasing the probability that the user sends wrong commands to the robot and gets demotivated, especially when he/she is a child. However, the single-epoch classifier presents high recall values, meaning that it is able to recognize well the presence of the P300 wave in the EEG signals when the target is flashed. Therefore, applying the evidence accumulation framework provides both a good detector of the P300 pattern and, at the same time, an improvement of the performance by better discarding negative events. Since the BCI remains a very uncertain channel, the difficulty of classifying the intention of the user may increase when the subjects are children: the P300 pattern in children appears less defined than in adults (see Fig. 2A), and thus the accuracy of the classifier was lower. Our preliminary results highlight the possible advantages of accumulating the subject's evidence over a limited number of blocks.
From the point of view of navigation, the proposed Brain-Robot Interface showed that children too can successfully control a robot through a BCI. By evaluating the navigation performance in children and adults, the adjustments we made to simplify the Brain-Robot Interface (such as the use of a shared-autonomy algorithm to control the robot, the addition of a dual feedback to limit the sending of wrong commands, the introduction of a command to activate the STOP/GO BACK actions, etc.) become apparent. Indeed, our preliminary results show that, in the case of children, the time required to reach the target positions T1 and T3 was lower than for adults. A possible explanation is that the robot is more autonomous when driven by children because, as previously explained, in order to simplify the system it accumulates evidence over a higher number of blocks, limiting the number of commands that children can deliver in comparison to adults. Nevertheless, we found that the navigation performance depended strongly on the level of attention, in agreement with [22].
Despite the expedients we adopted to simplify and facilitate the control of the robot, in order to make the system usable also by children, the protocol might be improved. Although the children were able to drive the robot, the procedure we adopted and the kind of experiments we performed might still be too complex and too close to the typical experimental designs used with adults. For example, the children involved in the study pointed out that the training phase was boring. Clearly, testing and studying BCI with children requires extra care compared to experiments with adults. In this context, this work aimed at promoting the collaboration of multiple disciplines, and it is therefore part of an already active long-term project in which we will evaluate and validate the system in a paediatric context, with the collaboration of neurophysiologists, psychologists and children experts, involving a large sample of children. Future directions of the proposed work will first consider making the graphical interface more attractive and engaging, in order to hold children's attention longer. Furthermore, we will explore the possibility of exploiting the robot also during the training phase, in order to motivate the children and create a pleasant atmosphere. In particular, an additional advancement in this direction will consist in integrating social behaviours performed by the robot, including, for example, communication capabilities. Further improvements will address the BCI system: in the future, we plan to add an additional control based on the level of attention of the child, in order to avoid the sending of involuntary commands due to loss of concentration.
ACKNOWLEDGMENT
This research was partially supported by Fondazione Salus Pueri with a grant from Progetto Sociale 2016 by Fondazione CARIPARO. We thank all the children and their families taking part in this study for their support in the development of the project, and Dr. Agnese Suppiej, Dr. Roberto Mancin and Dr. Elisa Cainelli for the collaboration.
REFERENCES
[1] J. Van Erp, F. Lotte, and M. Tangermann, "Brain-Computer Interfaces: Beyond Medical Applications," Computer, vol. 45, no. 4, pp. 26–34, 2012.
[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed," Nature, vol. 398, no. 6725, p. 297, 1999.
[3] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, G. Vanacker, J. Philips, and J. d. R. Millán, "A brain-actuated wheelchair: asynchronous and non-invasive brain–computer interfaces for continuous control of robots," Clinical Neurophysiology, vol. 119, no. 9, pp. 2159–2169, 2008.
[4] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. d. R. Millán, "Towards independence: a BCI telepresence robot for people with severe motor disabilities," Proceedings of the IEEE, vol. 103, no. 6, pp. 969–982, 2015.
[5] E. Mikolajewska, "Neurorehabilitation in pediatric stroke," Journal of Health Sciences, vol. 2, no. 3, pp. 023–031, 2012.
[6] J. K. Lynch, D. G. Hirtz, G. DeVeber, and K. B. Nelson, "Report of the National Institute of Neurological Disorders and Stroke workshop on perinatal and childhood stroke," Pediatrics, vol. 109, no. 1, pp. 116–123, 2002.
[7] C.-T. Kim, J. Han, and H. Kim, "Pediatric stroke recovery: a descriptive analysis," Archives of Physical Medicine and Rehabilitation, vol. 90, no. 4, pp. 657–662, 2009.
[8] G. Beraldo, M. Antonello, A. Cimolato, E. Menegatti, and L. Tonin, "Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1–6.
[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
[10] E. Mikołajewska and D. Mikołajewski, "The prospects of brain-computer interface applications in children," Open Medicine, vol. 9, no. 1, pp. 74–79, 2014.
[11] C. Guger, S. Daban, E. Sellers, C. Holzner, G. Krausz, R. Carabalona, F. Gramatica, and G. Edlinger, "How many people are able to control a P300-based brain–computer interface (BCI)?" Neuroscience Letters, vol. 462, no. 1, pp. 94–98, 2009.
[12] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, "Brain-Computer Interface Technology: A Review of the First International Meeting," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 164–173, 2000.
[13] U. Hoffmann, J.-M. Vesin, T. Ebrahimi, and K. Diserens, "An efficient P300-based brain–computer interface for disabled subjects," Journal of Neuroscience Methods, vol. 167, no. 1, pp. 115–125, 2008.
[14] N. V. Manyakov, N. Chumerin, A. Combaz, and M. M. Van Hulle, "Comparison of classification methods for P300 brain-computer interface on disabled subjects," Computational Intelligence and Neuroscience, vol. 2011, p. 2, 2011.
[15] J. Jin, B. Z. Allison, E. W. Sellers, C. Brunner, P. Horki, X. Wang, and C. Neuper, "An adaptive P300-based control system," Journal of Neural Engineering, vol. 8, no. 3, p. 036006, 2011.
[16] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[17] D. J. MacKay, "Bayesian Interpolation," Neural Computation, vol. 4, no. 3, pp. 415–447, 1992.
[18] G. Müller-Putz, R. Scherer, C. Brunner, R. Leeb, and G. Pfurtscheller, "Better than random: a closer look on BCI results," International Journal of Bioelectromagnetism, vol. 10, pp. 52–55, 2008.
[19] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. d. R. Millán, "The role of shared-control in BCI-based telepresence," in 2010 IEEE International Conference on Systems, Man and Cybernetics, Oct 2010, pp. 1462–1466.
[20] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, "Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks," Scientific Reports, no. 6, p. 38565, 2016.
[21] A. Finke, A. Lenhardt, and H. Ritter, "The MindGame: a P300-based brain–computer interface game," Neural Networks, vol. 22, no. 9, pp. 1329–1333, 2009.
[22] M. Carrillo-De-La-Peña and F. Cadaveira, "The effect of motivational instructions on P300 amplitude," Neurophysiologie Clinique/Clinical Neurophysiology, vol. 30, no. 4, pp. 232–239, 2000.