Review
Brain-Computer Interface-Based Humanoid Control: A Review
Vinay Chamola 1, Ankur Vineet 1, Anand Nayyar 2,3 and Eklas Hossain 4,*
1 Department of Electrical and Electronics, Birla Institute of Technology & Science, Pilani 333031, India; vinay.chamola@pilani.bits-pilani.ac.in (V.C.); h20180144@pilani.bits-pilani.ac.in (A.V.)
2 Graduate School, Duy Tan University, Da Nang 550000, Vietnam; anandnayyar@duytan.edu.vn
3 Faculty of Information Technology, Duy Tan University, Da Nang 550000, Vietnam
4 Department of Electrical Engineering and Renewable Energy, Oregon Institute of Technology, Klamath Falls, OR 97601, USA
* Correspondence: eklas.hossain@oit.edu; Tel.: +1-541-885-1516
Received: 25 April 2020; Accepted: 17 June 2020; Published: 27 June 2020


Abstract:
A Brain-Computer Interface (BCI) acts as a communication mechanism using brain
signals to control external devices. The generation of such signals is sometimes independent of
the nervous system, such as in Passive BCI. This is majorly beneficial for those who have severe
motor disabilities. Traditional BCI systems have been dependent only on brain signals recorded
using Electroencephalography (EEG) and have used a rule-based translation algorithm to generate
control commands. However, the recent use of multi-sensor data fusion and machine learning-based
translation algorithms has improved the accuracy of such systems. This paper discusses various BCI
applications such as tele-presence, grasping of objects, navigation, etc. that use multi-sensor fusion
and machine learning to control a humanoid robot to perform a desired task. The paper also includes
a review of the methods and system design used in the discussed applications.
Keywords: brain-computer interface (BCI); data fusion; NAO humanoid; electroencephalography (EEG); P300; biological feedback
1. Introduction
Brain-Computer Interfaces (BCIs) lie at the intersection of signal processing, machine learning,
and robotics systems. Brain-Computer Interface is a technique that records and processes
the brain signals of a person to perform a desired actuation. Electroencephalography (EEG),
Electrocorticography (ECoG), and Near-Infrared Spectroscopy (NIRS) are a few methods used for
recording brain signals. However, EEG is one of the most common methods used for BCI applications [1,2]. BCI provides an opportunity to develop a new form of communication mechanism
controlled using brain signals. This kind of mechanism becomes extremely helpful for those with
motor impairment [3]. For example, applications such as brain-controlled limbs, brain-controlled
chairs, brain-controlled speech systems, etc. can be developed using a Brain-Computer Interface.
Combining this communication mechanism and interfacing with a humanoid robot opens up
several possibilities to replicate human actions. A humanoid robot [4,5] resembles the human body
in terms of the shape and range of actions it can perform. This makes the humanoid robot a perfect
candidate for receiving the actuation from the brain signals and then interacting with its environment
accordingly. Since a humanoid robot is almost a replica of a human being, it can be controlled
to perform various day-to-day tasks that a human being performs. Thus, humanoids have great potential for a large number of prospective day-to-day applications. Such humanoids can especially serve as assistants for the disabled by helping them with their daily activities.
Sensors 2020, 20, 3620; doi:10.3390/s20133620; www.mdpi.com/journal/sensors
Humanoid systems can also be used in mission-critical operations like disaster recovery [6,7], military operations [8–10], etc. However, the reliability required of the system in such applications is much higher than in the applications mentioned earlier. Security of such systems is also a major concern. Hence, there has been growing research in this direction to secure such systems, thereby preventing them from being hacked and misused [11–13].
While designing a BCI-controlled humanoid, the brain-control interface system requires a
translation algorithm to convert the input brain signals into control signals for the humanoid. Traditionally, brain signals were solely taken as the input signal for this purpose. However, such systems at times suffered from long training times and poor accuracy. One of the major factors that contributed to this was the significant variation in the input signal. To improve the performance of such systems, researchers have actively explored multi-sensor fusion in the past several years. Such systems are often termed hybrid BCI systems, and they make control decisions based on the fusion of inputs from various sensors. The use of this multi-sensor fusion has been shown to improve the robustness of
the BCI-based system [14,15]. The major contributions of this paper are as follows:
- This paper reviews various applications in which a humanoid is controlled using brain signals to perform a wide variety of tasks such as grasping objects, navigation, telepresence, etc.;
- For each of the applications, we discuss the overview of the application, the system design, and the results associated with the experiments conducted;
- Specifically, in this review, we consider BCI applications which use just EEG signals (discussed in Section 3), applications which use multi-sensor fusion, where in addition to EEG, other sensor inputs are also considered for execution of the desired task (Section 4), as well as augmented reality-assisted BCI (Section 5);
- To the best of our knowledge, this work is the first review on BCI-controlled humanoids.
The rest of the paper is organised as follows: Section 2 discusses the preliminary knowledge required to understand the paper. Section 3 discusses applications where a humanoid robot is controlled using only brain signals. Section 4 discusses humanoid control applications using hybrid BCI. Section 5 discusses a BCI-controlled humanoid application supported by Augmented Reality. Section 6 summarises the applications discussed in the paper. Section 7 concludes the paper.
2. Preliminary Knowledge
This section discusses a few preliminary basics that are required to understand the works
described in the paper.
2.1. Brain-Computer Interface
Rehabilitation is one of the major areas where BCI finds its applications. BCI can act as a
communication mechanism for those with motor impairment. In the case of people with motor
impairment, their nervous system is not able to properly execute the brain's commands. For example, the brain may think of lifting the left hand, but due to a person's left hand being paralyzed (on account of nervous disorders), the hand may be unable to move. However, the signals from the brain can be directly sensed using EEG electrodes and can be used to control a robotic arm, which may imitate the lifting of the left arm [16]. Various works like [17–30] discuss several BCI applications. Such applications have greatly motivated recent advances in BCI, as they offer new communication possibilities for those who are paralyzed or suffer from various bodily disabilities. BCI works in three stages. The first stage involves taking input from the brain, which is generally done using Electroencephalography (EEG). The second stage consists of a translation algorithm that maps the input signals from the brain to a predefined output command, and the third stage involves controlling the external device based on the command [31–33].
Next we discuss these three stages in a little more detail. BCI Input (the first stage): This stage
consists of acquiring data pertaining to one or more features of the brain’s activity. Different parts of
the brain are responsible for processing different functions. For example, sensory functions related
to vision are processed in the occipital lobe of the brain. Furthermore, the frontal lobe is responsible
for planning, decisions, and making speech [34]. Depending on the desired action to be performed, the EEG sensors can be used to acquire the brain signals from that portion of the brain for further processing. The second stage, namely the Translation Algorithm, takes the acquired brain signals as
the input, and translates them into a specific output command, which could be used for a particular
action. In particular, this stage involves using various classification algorithms like Linear Discriminant
analysis (LDA), Artificial Neural Networks (ANN), etc. (as discussed in Section 2.3) for classifying the
action into a particular category. The key features of the translation algorithm are the transfer function
used, its adaptability, and the control output generated. The transfer function can be linear (e.g., LDA)
or non-linear (e.g., Neural-Network). Adaptive algorithms can use sophisticated machine-learning
algorithms to adapt according to the brain [35]. The third stage, BCI Output, deals with the output.
The control output generated for application-specific devices can be of two forms: (i) Discrete or (ii)
continuous. The discrete output is the one that can be used for selection among fixed outputs (e.g.,
letter selection) while the continuous output can help in navigating (e.g., cursor movement) [17].
2.2. Hybrid BCI
Traditional BCI approaches were dependent on just using brain signals for generating output.
However, it is observed that the salient features of the brain signals could differ among various subjects.
In fact, sometimes even for the same subject, the features varied from trial to trial [36]. Also, analyzing a single aspect or feature can, at times, lead to missing out on important information. These challenges make the use of machine learning for specifying and extracting features from the signals very appropriate. Machine learning has been used in various areas of application in the past to solve challenges of diverse natures [37–39] and also finds great applicability in solving challenges related to BCI signals. Machine learning methods have been able to increase the decoding accuracy prominently, as discussed later in the paper. To maximise the robustness of the system, to increase the information transfer rate, and to decrease the training time, the BCI system records and analyzes multiple complementary signals [40,41]. These systems use data fusion techniques and machine-learning algorithms to fuse the complementary signals. This technique is termed a Hybrid BCI, as demonstrated in Figure 1.
Figure 1. Block diagram of Hybrid Brain-Computer Interface (BCI).
Any Hybrid BCI system must fulfil four major criteria, which are as follows [42,43]:
1. Brain signals must be used in the BCI System;
2. The user should be able to control one of the brain signals intentionally;
3. The BCI System should do real-time processing of the signal;
4. The user must be provided with feedback of the BCI output.
Generally, the signal combinations used by Hybrid BCI include a mix of Electromyography (EMG) [44] + Electroencephalography (EEG), Event-Related Desynchronization (ERD) along with Steady State Visual Evoked Potential (SSVEP), Near-Infrared Spectroscopy (NIRS) along with EEG, ERD along with P300, etc. [45,46]. Table 1 lists the descriptions of the major signals and methods discussed above [47–49].
Table 1. Comparative Analysis of Various Methods used for Recording Features.

| S.No. | Method | Description | Characteristics |
| --- | --- | --- | --- |
| (1) | Electroencephalography (EEG) | Measuring the electric signals produced by the human brain | Commonly used method; safe and affordable; poor spatial resolution |
| (1a) | Evoked signals: SSVEP | Brain signal generated in response to looking at a source having a specific frequency of flickering | Training time is short; requires continuous attention to the stimuli; exhausting for the user after long sessions |
| (1a) | Evoked signals: P300 | Signal generated in response to an infrequent stimulus, recorded with a latency of 250–500 ms | (same evoked-signal characteristics as above) |
| (1b) | Spontaneous signals | Voluntary signals generated without an external stimulus | External stimuli not required; long training required |
| (2) | Electromyography (EMG) | Measures the electrical activity produced by skeletal muscles | Easy to record; more noise contamination |
| (3) | Electrocorticography (ECoG) | Measuring the electric signals by placing electrodes beneath the skull | Better signal quality than EEG; risky (semi-invasive); less common |
| (4) | Functional magnetic resonance imaging (fMRI) | Measures changes in the metabolism of the brain (e.g., oxygen saturation) | Good spatial resolution; poor temporal resolution (1 s–2 s); sensitive to motion |
| (5) | Near-Infrared Spectroscopy (NIRS) | Measures changes in blood oxygenation in the brain using near-infrared light | Good spatial resolution; poor temporal resolution (2 s–5 s) |
2.3. Classification Algorithms
A major requirement of the classifiers in BCI systems is to ensure good performance in terms of classification accuracy [50]. For example, let us take the case of a patient using a BCI-controlled wheelchair. Now suppose they have the facility to control the BCI wheelchair by taking it left, right, front, or back based on their thoughts. So when they think that the wheelchair should move left, the BCI system should be able to process the brain signals appropriately and must classify the action as 'move left'. The classification algorithm has the task of taking multiple features (e.g., brain signals) as an input and distinguishing between different classes (e.g., left, right, front, and back in the example given here). In performing this task, it is important to choose features carefully so that the classification algorithm can significantly differentiate between the multiple classes [51]. The features that act as inputs to the BCI system for controlling humanoid robots are of two types: (i) Temporal features or (ii) frequency features. Temporal features represent the amplitude of the generated signals over time, whereas frequency features represent the frequency power spectra of the signals. Generally, P300-based BCI uses temporal features, whereas ERD- and SSVEP-based BCI uses frequency features.
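As a concrete illustration of these two feature types, the following minimal sketch (our own illustrative Python, not taken from the reviewed works) computes temporal features as downsampled post-stimulus amplitudes and frequency features as band powers from a power spectrum; the sampling rate, number of points, and band edges are assumptions.

```python
# Illustrative sketch of temporal vs. frequency features for BCI classification.
# The 256 Hz sampling rate, 50-point downsampling, and alpha/beta bands are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256.0  # assumed sampling rate (Hz)

def temporal_features(epoch, n_points=50):
    """Downsampled amplitude-over-time of one channel's post-stimulus epoch (P300-style)."""
    idx = np.linspace(0, len(epoch) - 1, n_points).astype(int)
    return epoch[idx]

def frequency_features(signal, bands=((8.0, 12.0), (12.0, 30.0))):
    """Band power in each frequency band (e.g., alpha, beta) from Welch's PSD (ERD/SSVEP-style)."""
    freqs, psd = welch(signal, fs=FS, nperseg=int(FS))
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
```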
Classification: Different classifiers are used to translate the features extracted from brain signals
to control commands [52–55]. These classifiers range from the simplistic linear classifiers to complex
non-linear classifiers. Some of the commonly used classifiers are: (i) Linear Discriminant analysis
(LDA), (ii) Support Vector Machines (SVM), (iii) Artificial Neural Networks (ANN), and (iv) Statistical
classifiers [56]. These classifiers are discussed in detail below.
Linear Discriminant analysis (LDA) [57]:
LDA is a type of linear classifier. The major benefits of using LDA are that: (i) The computational complexity of LDA is low, and hence the time taken for classification is reduced. This is useful when using the algorithm in an online session, as discussed later. (ii) LDA is a simple classifier to use and visualise. Linearity can be a limitation while handling non-linear EEG data. On the other hand, simpler techniques like LDA are suitable when only a small training data set is available. LDA is used in a number of BCI-controlled humanoid applications for classification. A typical decision boundary of LDA is shown in Figure 2. For LDA, the decision regions are singly connected and convex. Figure 2 shows a 3-class classification in which the colour of each region denotes the predicted class.
Figure 2.
Decision boundaries for the different classifiers (Linear Discriminant analysis (LDA), Support
Vector Machines (SVM), and Artificial Neural Networks (ANN)).
Artificial Neural Networks (ANN) [58,59]:
ANN is a type of non-linear classifier. The classifier
is inspired by the neuron structure of the brain. It is used to approximate non-linear functions. Using
ANN is generally computationally intensive and requires a number of parameters to be configured.
It is more complex in terms of usage as compared to LDA and the computational time taken to
generate the output is also longer. However, ANNs are highly adaptive and can be applied on a
wide variety of use-cases. Unfortunately, ANNs are prone to over-fitting, and thus the selection of
the parameters/architecture and regularisation needs to be done carefully. The decision boundary of an ANN can be seen in Figure 2; the non-linearity of the decision function is evident from the figure. The figure shows two classes, one represented in red and the other in blue, which have been classified using an ANN.
Support Vector Machines (SVM) [57,60]:
SVM is another classifier that can produce non-linear decision boundaries. However, while using SVM, extensive configuration is not needed. It is useful in cases when the training data is limited, and most of the time it generalises better. This makes its use advantageous for BCI systems, as the classifiers, once trained, classify brain signals over multiple sessions. The features generated during multiple sessions may vary even for a single user. Hence, models which are less sensitive to over-fitting may perform better. SVM also performs well with high-dimensional data. However, SVMs are sometimes slower than other classifiers, which becomes an issue while dealing with large data. The decision boundary maximising the margin between the classes is shown in Figure 2.
Statistical Classifiers:
These classifiers [61] use posterior probabilities to select the class that has the highest probability based on the input features of every new instance. This type of classifier utilises prior knowledge to classify instances. These classifiers also perform well in case of uncertainty,
which is expected when dealing with brain signals. Uncertainty of the signals can be caused by fatigue
or learning effects.
Table 2 summarises the typical classifiers that are applied in BCI.
Table 2. Comparison of classification algorithms.

| Classifier | Mechanism | Properties | Choice Consideration |
| --- | --- | --- | --- |
| Linear Discriminant Analysis (LDA) | Decision boundary is made by maximising the separation between the class means and minimising the variance inside each class | 1) Simple; 2) less computational; 3) decision boundary is linear | Suited for online sessions; smaller training set |
| Artificial Neural Networks (ANN) | Minimises the error in classifying training data by adjusting the weights of the neural connections | 1) Many parameters to set; 2) highly computational; 3) decision boundary is non-linear; 4) prone to overfitting | Suitable for a variety of applications; sensitive to noisy data |
| Support Vector Machines (SVM) | Decision boundary maximises the margin between two classes | 1) Decision boundary can be linear or non-linear; 2) less prone to overfitting; 3) high computation for non-linear cases | Appropriate for high-dimensional data; less sensitive to noisy data |
| Statistical Classifiers | Estimates the probability corresponding to each class and selects the class with the most favourable probability | 1) Decision boundary is non-linear; 2) efficient for uncertain samples | Suited as an adaptive algorithm; considers variation in brain dynamics (e.g., fatigue) |
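To make the comparison in Table 2 concrete, the sketch below (our own illustration, not code from the reviewed studies) cross-validates one representative classifier from each family on a generic feature matrix X (one row per trial) and label vector y (one command per trial); Gaussian Naive Bayes stands in for the statistical classifiers, and all hyper-parameters are illustrative defaults.

```python
# Illustrative comparison of the classifier families from Table 2 (assumed settings).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB          # simple statistical (Bayesian) classifier
from sklearn.neural_network import MLPClassifier    # small ANN
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """Return mean 5-fold cross-validated accuracy for each classifier family."""
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
        "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
        "Statistical (Naive Bayes)": GaussianNB(),
    }
    return {name: cross_val_score(model, X, y, cv=5).mean()
            for name, model in models.items()}
```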
2.4. Humanoids
A humanoid robot is a robot with a body structure and features similar to that of a human.
Three main primitives for a humanoid robot are sensors, planning, and control. Humanoid robots
generally have proprioceptive sensors to sense the position and exteroceptive sensors to get data
on what is being touched. Actuators in humanoid robots mimic the action of muscles and joints.
Following is a list of the humanoid robots that have been commonly used in BCI-controlled humanoid applications in the recent past, as shown in Figure 3. The NAO humanoid [62], developed by SoftBank Robotics, is one of the most commonly used and is actively employed for research and educational purposes.
Figure 3. Humanoids: (a) NAO (Nao Humanoid), (b) HRP 2, (c) KT-X, and (d) DARwIn-OP.
1. Nao Humanoid (Softbank Robotics) [62];
2. HRP-2 Humanoid (Kawada Industries) [63];
3. KT-X Humanoid (Kumotek Robotics) [24];
4. DARwIn-OP (Robotis) [64].
In general, the humanoid robots in the list above have the following set of characteristics:
1. 17–30 degrees of freedom;
2.
Multiple sensors like gyroscope, force sensors, etc. on different body parts like head, torso, arms, legs;
3. Microphones and speakers to interact with humans;
4. Two cameras for object detection and recognition (in NAO);
5. Custom application development due to open architecture.
Figure 4 gives an overview of the BCI-controlled humanoid applications discussed in the paper. The P300 signal is mostly used in these applications as it gives high accuracy [48,65].
Figure 4. Overview of applications.
3. BCI-Controlled Humanoid Applications Using Only EEG
In this section, we discuss various BCI-controlled humanoid applications that use only the
EEG signal as an input. The EEG input is processed and translated to an appropriate control
output. Specifically, we consider three applications, namely grasping a glass of water, telepresence,
and a museum guide application using the BCI-controlled humanoid. These applications are discussed in the following subsections one by one. For every application, we provide an overview and a system design description, followed by the salient results associated with the conducted experiments.
3.1. Grasp a Glass of Water using NAO (Type: Rehabilitation)
Overview: This application [66] involves using a BCI-controlled humanoid to grasp a glass of water. This kind of application can be helpful for people who find it difficult to perform such a task because of their age or a serious medical condition like Amyotrophic Lateral Sclerosis (ALS). Note that ALS patients depend completely on caretakers for their daily needs. Scientists and researchers have been actively working to develop technologies to help such patients. A promising technology in this direction is the use of a BCI-controlled humanoid robot. The authors in [66] use an EEG-based approach to capture the brain's activity, which is recorded through electrodes implanted in cortical neurons. The signals were processed to actuate the humanoid to fetch the water. Salient state changes in their system are shown in Figure 5. The experiments for BCI humanoid control of this task were performed by both healthy individuals and those suffering from ALS, and were divided into multiple sessions, namely: (i) Calibration Session, (ii) Online Session, and
(iii) Robotic Session. The purpose of dividing the experiment into multiple sessions was to tune the
signal processing parameters as well as the classifier before performing the actual task in the Robotic
Session. This is necessary because the parameters are dependent on the subject performing the tasks.
This also helps the subjects to get familiar with the system. Description of each session is given in
Table 3. Note that in Table 3, the threshold refers to the percentage of correct command selection that is
required to transition from one session to the next one. Feedback indicates whether the visual feedback
about the correctness of command was provided in the session. Accuracy is the ratio of correctly
executed commands to the total number of commands. In this experiment, an ERP approach known as the oddball paradigm [67] was used, which relies on visual evoked potentials. The oddball paradigm is an experimental design in which the subject is exposed to a sequence of repetitive stimuli that is infrequently interrupted by a deviant stimulus. The reaction of the subject to the oddball stimulus is recorded. In this case study, the oddball paradigm is used to identify the infrequent visual stimuli, elicited by highlighting the grid cell of the user's interest in the User Interface (UI) (Figure 6). The P300 brain signals become prominent approximately 300 ms after the stimulus.
Figure 5. State diagram of process (adapted from: [66]).
Table 3. BCI Sessions used in [66].

| Session | Trials | Threshold & Feedback | Purpose | Accuracy (%, mean ± standard deviation) |
| --- | --- | --- | --- | --- |
| Calibration | 9 | 100%; no feedback | For tuning signal processing parameters | - |
| Online | 20 | 55%; with feedback | Train the classifier | Healthy: 74.5 ± 5.3; Amyotrophic Lateral Sclerosis (ALS) patients: 69.75 ± 15.8 |
| Robotic | 10 | N.A.; with feedback | Robot executes the selected command | Healthy: 72.4 ± 9.4; ALS patients: 71.25 ± 17.3 |
System design: The system consisted of three major components. These were the user interface,
the network interface, and the robotic system. The user interface used was a 3 × 3 matrix, as shown in Figure 6. Each grid cell in this figure represents an action performed by the humanoid. The interface shows two types of commands. The first set of commands controls the movement of the humanoid robot in the environment (forward, backward, turn, etc.) and the second set of commands is used to grasp and give items. The grid cells showing the hand icon in Figure 6 correspond to the grasp and give actions, while the rest of the cells correspond to different movement commands. The BCI data acquisition system, along with the user interface, collects the EEG signal using a g.USBamp EEG kit, digitised at 256 Hz. Filters such as notch and Butterworth filters were used to strengthen the signal and remove noise. The machine learning algorithm used for classification was stepwise LDA with the One vs Rest approach, which takes one class as positive and the rest as negative and trains the classifier; the class with the maximum distance from the hyperplane compared to all the other classes is then selected [66]. The network interface passed the commands from
the BCI system to the robotic system. The application part was completely dependent on the robotic
system, which allowed two types of control modes. Both modes are illustrated in Figure 7.
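The following is a minimal sketch of the signal chain described above; it is our own illustration rather than the authors' code, and the 50 Hz notch, 0.5–30 Hz Butterworth band-pass, and 800 ms epoch window are assumed values.

```python
# Illustrative EEG preprocessing for a P300 grid speller: notch + band-pass filtering
# at 256 Hz, followed by averaging of stimulus-locked epochs (assumed parameters).
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256.0  # sampling rate of the g.USBamp kit

def preprocess(eeg):
    """eeg: array of shape (n_channels, n_samples)."""
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=FS)                    # remove power-line interference
    eeg = filtfilt(b_n, a_n, eeg, axis=-1)
    b_bp, a_bp = butter(4, [0.5, 30.0], btype="bandpass", fs=FS)   # broad P300 band
    return filtfilt(b_bp, a_bp, eeg, axis=-1)

def average_epochs(eeg, stim_onsets, window_s=0.8):
    """Average fixed-length epochs (in samples) starting at each stimulus onset."""
    n = int(window_s * FS)
    epochs = [eeg[:, s:s + n] for s in stim_onsets if s + n <= eeg.shape[1]]
    return np.mean(epochs, axis=0)   # averaged ERP, shape (n_channels, n)
```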
Figure 6. 3 × 3 matrix showing user interface (adapted from: [66]). (a) Teleoperated Mode: User gives directional commands using only arrows and (b) Autonomous Mode: User gives high-level commands corresponding to the symbol.
Figure 7. Autonomous and teleoperated mode (adapted from: [66]).
Teleoperated Mode: In this mode, the user controls the movement of the robot and also gives
commands to grasp and give a glass of water;
Autonomous Mode: In this, the user would just give abstract commands and the humanoid plans
its actions according to the state.
Results: The experiment showed that the BCI system, along with humanoid robots, can be
effectively used by ALS patients with a mean accuracy of 71.25% in the robotic session. Additionally, one of the interesting observations reported by the authors was that the experimental setting (i.e., experiment conducted at home or in a lab setting) did not affect the control performance significantly.
3.2. Telepresence by Humanoid Using P300 Signal (Type: Entertainment)
Overview: The application discussed in the previous section was simpler in terms of the actions
performed, but provided a granularity of control that is sometimes not desired at the user level.
This section discusses one such application in which a person is able to interact with the world using telepresence through a humanoid [68]. The control commands given to the humanoid in this case are high level, i.e., the humanoid performs several subtasks that are grouped together and denoted as one high-level task (event); a few of such events can be seen in Figure 8a. Two major techniques used for
the implementation of this application were (i) programming by demonstration in which the robot
learns a task by observing someone performing it, and (ii) BCI-based control in which the brain signal
generated by the visual stimuli is converted to control signals by classifying the P300 signal generated.
In this experiment, similar to the previous experiment (i.e., Section 3.1), the complete process was
divided into two sessions illustrated in Table 4. The two sessions are namely: (i) Calibration session
and (ii) real-time operation. The part of training the classifier was performed in the calibration session
using the same EEG data, which in the previous case-study was performed in a separate session named
online session. This experiment also used the oddball paradigm method for elicitation of the brain
signals. However, as compared to the previous case study, the number of commands was increased to 16. All the commands used are high level and are depicted in Figure 8a. The purpose of doing that was to remove the complexity of humanoid control from the user end. Logistic regression was used for the classification of signals; it was trained to predict whether a stimulus response corresponds to a target or non-target event [68]. For the validation of the trained model, the subjects were asked to control the humanoid using brain signals. The set of tasks to be performed was pre-decided.
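The sketch below illustrates, under our own assumptions rather than the authors' implementation, how logistic regression can score post-stimulus epochs as target/non-target and how the candidate high-level commands could then be ranked.

```python
# Illustrative target/non-target P300 detection with logistic regression (assumed setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_p300_detector(epochs, is_target):
    """epochs: (n_trials, n_channels, n_samples); is_target: (n_trials,) of 0/1 labels."""
    X = epochs.reshape(len(epochs), -1)      # flatten each epoch into a feature vector
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return clf.fit(X, is_target)

def pick_command(clf, epochs_per_command):
    """Choose the command whose flashes look most 'target-like' on average."""
    scores = [clf.predict_proba(e.reshape(len(e), -1))[:, 1].mean()
              for e in epochs_per_command]   # one score per candidate command
    return int(np.argmax(scores))
```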
Table 4. Experiment sessions used in [68].

| Session | Trials | Feedback | Purpose | Accuracy (%) |
| --- | --- | --- | --- | --- |
| Calibration | 5 | With feedback | Tune signal processing parameters & train the classifier | - |
| Real-time | - | With feedback | Control the humanoid robot | 78 |
System Design: Figure 8b shows the abstract system design of the entire system. Some of
the functionalities from the actual architecture have been grouped in the diagram to focus on key
components. The FieldTrip buffer is the main driver of the whole architecture, and it manages both the BCI system and the NAO system. It also stores the BCI model. The subject uses the Graphical User Interface (GUI) to generate brain signals, recorded using g.USBamp and g.LADYbird hardware with a 256 Hz sampling frequency and 16-bit resolution. Signals are passed on to the BCI module either for tuning/training the model or for classification.
Figure 8. (a) 4 × 4 grid showing high-level commands and (b) abstract system pipeline for telepresence (adapted from: [68]).
Results: During the calibration session, the model is trained and stored in the buffer. During
real-time operation, the stored model is used to classify signals. Based on the classification, the events
are generated and passed onto the NAO humanoid as control commands. The feedback of the same is
shown on the user’s screen. The system achieved a real-time accuracy of 78% on average.
3.3. BCI Operated Museum Guide (Type: Entertainment)
Overview: This application [69] uses a remotely controlled robot that was operated by a healthy or paralysed person through BCI. The aim is to use the robot as a museum guide that will send remote
visuals to the person operating it through BCI. In the application, the person could use the P300
signals to control the navigation of the robot. This provided the user with a perception of telepresence,
similar to the previous case study. Note that although the authors did not use a humanoid in their case study, a humanoid could very well be used in such an application, and thus the case study has been included. In this experiment, more focus was given to the GUI used in the BCI system. The GUI is different in that it is more user-friendly and is not arranged as a grid, unlike the UI used in the previous case studies. The proposed BCI system used the P300 brain signal; the details about the BCI sessions are not discussed. In the new GUI, the selection of a command was done by focusing on the flashing
navigation arrow. This is similar to the oddball paradigm used in earlier experiments. To simplify
the UI, the authors divided the process of selection into two parts. Each part has a different P300
elicitation interface. The first part is before starting with the input phase. In this, the user was asked
to select between the two robots: Peoplebot and Pioneer3 depending upon the location they want
to visit. In the application discussed, Peoplebot was located in the Computer Science department,
and Pioneer3 was located in the Botanic garden. Both the robots were equipped with wheels for
movement, micro-controller, IR sensors, sonar rings for avoiding collision and a camera. In general,
the first part could be considered as a selection among two robots, Robot 1 and Robot 2, which were
located at two different locations. The user could select the robot as per their preference to visit a
location as shown in Figure 9a. After the selection of the robot, the navigational instruction was given
using a screen, as shown in Figure 9b. The arrows represent the direction of the robot’s movement,
which was continuous, and could be stopped using the stop button. All this was controlled using
the brain signals based on P300. The screen in the middle displays the output generated using the
robot’s camera.
Figure 9. (a) Robot selection menu, (b) navigation screen, and (c) two views for the user (adapted from: [69]).
System Design: The communication pattern between the robot and the BCI system follows a client-server architecture, and Transmission Control Protocol/Internet Protocol (TCP/IP) is used in the network stack. The robot plays the role of the client, and the BCI system acts as the server. Initially, the robot tries to establish a connection with the BCI system and waits for the command to be executed. The BCI architecture converts the signal from the brain into the corresponding command; the server then sends the command to the client program running at the robot end. The robot can handle three types of commands in general: (i) Start Session Command, (ii) Execution Command, and (iii) End Session Command. When the client-server connection is established, a "Start" command is received by the robot, which enables direct control of the robot through brain signals. This control is stopped by receiving the "End" command. At the server end, after sending the command to be executed, the server waits for the action to be executed. If the action is done, the server will get the result of the action from the client. However, if the command is not correct, the client will send a warning command to the server, and the server will respond with the same command.
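A minimal sketch of this client-server exchange is given below; the port number, message strings, and handshake details are our own illustrative assumptions, not the protocol used in [69].

```python
# Illustrative TCP exchange: the BCI system acts as the server, the robot as the client.
import socket

def bci_server(host="0.0.0.0", port=5005):
    """BCI side: accept the robot's connection, send commands, read execution results."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"START\n")                    # enable direct brain control
            conn.sendall(b"MOVE_FWD\n")                 # command decoded from brain signals
            result = conn.recv(1024).decode().strip()   # "OK" or "WARN <cmd>" from the robot
            if result.startswith("WARN"):
                conn.sendall(b"MOVE_FWD\n")             # respond by repeating the same command
                conn.recv(1024)                         # read the retried result
            conn.sendall(b"END\n")                      # close the session

def robot_client(host, port=5005):
    """Robot side: connect, execute received commands, report the outcome."""
    with socket.create_connection((host, port)) as s:
        for line in s.makefile("r"):
            cmd = line.strip()
            if cmd == "START":
                continue                                # session enabled, nothing to execute yet
            if cmd == "END":
                break                                   # session closed by the server
            # execute_on_robot(cmd) would drive the actual platform here (placeholder)
            s.sendall(b"OK\n")                          # or b"WARN <cmd>\n" on an invalid command
```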
Results: Using this application, a person could visit the museum through the robot because of
telepresence. It was possible to simulate where the robot walked with the help of a two-dimensional
map. The person could see the FOV (Field of View) of the robot’s camera with the help of a graphical
user interface shown in Figure 9c and then decide the next displacement. Path planning could be done
to avoid the sensor’s errors.
4. BCI-Controlled Humanoid Applications Using Hybrid BCI
In this section, the control command depends not only on the brain signals recorded using EEG but also on complementary signals generated by other parts of the body. We discuss two case studies in this section.
4.1. Picking Objects Using Neuro-Biological Feedback Fusion (Type: Rehabilitation)
Overview: The application [70] discussed in this section is similar to the one in which a glass of water is fetched. However, the major difference is that it uses multi-sensor data for classifying the control commands. The authors discuss a new method for human-humanoid interaction for ALS-affected patients. The authors make use of a biofeedback factor, which depends on the user's intention, attention, and focus. This factor was then used to recognise the user's mental state, based on which the robot was directed to perform certain tasks.
The task performed in this application is very similar to [66]. Similarity can also be seen in the way the experiment was divided into a Training Session, Online Session, and Robotic Session, as discussed in Table 5. These sessions were combined with the biological feedback to support decision making based on a certain threshold. The biological factors were used as they indicate the mental state of the user. The architecture uses a combination of EEG signals, which are elicited using visual stimuli, along with a tracker that tracks the user's eye movement. This biofeedback-based system is used to extract features such as attention, intention, and focus. Figure 10b shows the actual workflow. The task of the experiment was to grasp a glass of water.
Table 5. Experiment sessions used in [70].

| Session | Trials | Threshold & Feedback | Purpose | Success | Bio-Feedback Factor |
| --- | --- | --- | --- | --- | --- |
| Calibration | Until 100% correctness (avg.: 3) | 100%; no feedback | Calibrate the BCI system over the neural response | - | - |
| Online | 10 | -; with feedback | Select the command with visual feedback | Healthy: 100%; ALS: 97.22% | Healthy: 78.15%; ALS: 79.61% |
| Robotic | 5 | -; with feedback | Select the command with robotic feedback | Healthy: 100%; ALS: 96.97% | Healthy: 75.83%; ALS: 84.25% |
System Design: The NAO humanoid is used along with a BCI system that includes a bio-signal amplifier, which converts the user's brain signals into digital form, and a tracker, which tracks the location of the focus of the user's eye, as shown in Figure 10a. The components of the system are as follows:
1. BCI system:
Visual Evoked Potentials (VEPs) and P300 are used. Oddball paradigm is used for eliciting ERPs.
The salient features of the system were as follows:
Signal Processing: g.USBamp device was used for recording the signals, using 10–20 standard
system. The signal was digitised at 256 Hz. Butterworth filter was used to reduce the artefacts.
A temporal filter was also used to average the samples in order to reduce the noise. In this study,
6 epochs each with a window of 800 ms were used.
Feature extraction: Fisher's stepwise Linear Discriminant is used during training to configure the classifier according to the user's brain. LDA was used to differentiate the different classes using hyperplanes. In this application, LDA evaluates the stimuli recorded for every action on the grid and then selects the most prominent action corresponding to a grid cell.
User Interface: It is similar to the 3 × 3 grid, which was used in [66] (Figure 6). Low-level behaviours include controlling all the possible directional movements of the humanoid. However, high-level behaviours include issuing control commands like holding some item and giving the held item, similar to the ones considered in [66].
2. Biofeedback system uses neurological states and gaze: The biofeedback system takes into account the user's eyes and brain activity. It includes four parameters—mental intention, attention, visual focus, and stress. An action is executed only when the biofeedback factor (Bf) is greater than 60%.
The various modules associated with the bio-feedback system are explained below:
Attention module: Since there are nine commands, Fisher's Linear Discriminant (FLD) is used with a one-versus-rest approach. The attention is expressed as a percentage and is based on the power of P300 waves measured while performing the task.
Intention module: Correlation factor of the P300 wave is used to measure intention. It is based on
the precision of the system.
Visual focus module: It is calculated by evaluating the user's gaze through eye-tracking, as shown in Figure 10a. Here Fc represents the central focus, Fl the lateral focus, and Fo the outer focus; all values are expressed as percentages.
Entropy module: A stressful condition corresponds to high entropy in the brain signals. Signal processing steps are performed to extract the normalised value of the entropy. Finally, the value Bf is calculated by taking a weighted average of the attention, intention, and visual focus values (a small illustrative sketch of this fusion is given after this list).
3. Connection of the subject to the robot: For receiving commands from the BCI, a User Datagram Protocol (UDP) connection is made to the control interface. The connection to the robotic system is made through a TCP/IP socket for reliability.
4. Controlling the behaviour of the robot: Two control modes are proposed by the authors:
Navigation mode: NAO can move in 6 ways, namely walking (forward & reverse), turning (left & right), and rotating (clockwise & anti-clockwise).
High-level mode: It includes complex tasks like holding on to an object and giving the object to the user after identifying the user's location.
The distance metric (O) is also used to avoid collisions based on a threshold value. If the distance metric is less than the threshold value, then it is considered safe to execute a command. Once that is ensured, the corresponding reaction-safe command is activated along with the biological factor Bf and O, which are passed to a function that finally executes the command Rk corresponding to the control command.
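The sketch below illustrates the fusion step referred to above: Bf is computed as a weighted average of the attention, intention, and visual-focus percentages, and a command is executed only when Bf exceeds 60% and the distance check passes. The weights and the distance threshold are our own illustrative assumptions, not values from [70].

```python
# Illustrative biofeedback fusion: weighted average of attention, intention and visual
# focus, gated by the 60% threshold and an (assumed) distance check before execution.
from dataclasses import dataclass

@dataclass
class Biofeedback:
    attention: float      # % derived from P300 power
    intention: float      # % derived from P300 correlation
    visual_focus: float   # % derived from eye tracking (Fc / Fl / Fo)

def biofeedback_factor(b: Biofeedback, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted average of the three biofeedback parameters (result in %)."""
    wa, wi, wf = weights
    return wa * b.attention + wi * b.intention + wf * b.visual_focus

def should_execute(b: Biofeedback, distance_metric: float,
                   bf_threshold=60.0, distance_threshold=0.5) -> bool:
    """Execute the selected command R_k only if Bf > 60% and the distance metric is safe."""
    return (biofeedback_factor(b) > bf_threshold
            and distance_metric < distance_threshold)

# Example with the healthy-user online-session averages reported in the study
# (attention 74.59%, intention 43.52%, visual focus 99.03%): Bf is roughly 72.6% here.
print(should_execute(Biofeedback(74.59, 43.52, 99.03), distance_metric=0.2))
```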
Results: In the experiment, the biological factor represents the mental state of the user. The average
value of attention, visual focus and intention for healthy users during the online session were 74.59%,
99.03%, and 43.52%, whereas for ALS users the values were 76.70%, 90.81%, and 63.01%. During the
robotic session the average values for these parameters for healthy users were 69.60%, 98.49%, and
42.98%, and ALS users achieved 79.45%, 96.16%, and 70.03% respectively. The attention and intention
value for ALS users was better than for healthy users. The Bf value also increased in the robotic session for ALS users. This indicates that the presence of the robot in the robotic session acts as positive feedback, particularly for ALS users, supporting studies like [71,72]. The same can also be attributed to better attention and intention among ALS users.
Figure 10. (a) Eyeball tracking in grid cell, (b) flow chart of the system using neuro-biological fusion (adapted from: [70]).
4.2. Humanoid Control using Facial Signals (Type: Entertainment)
Overview: This application [73] uses three types of bio-electric potentials, i.e., EOG (the electric potential generated by eye movement), Glossokinetic Potential (GKP, the electric signals originating from tongue movement), and EMG. Although the application discussed here uses these three signals, an EEG-based system is used for signal acquisition, so the BCI data can also be made use of. With that integration, the system can utilise all the electric potentials generated from the entire head region. The designed application can identify two types of tongue movements, i.e., left-to-right and right-to-left, and two kinds of horizontal eye movements similar to the tongue movements; along with these, teeth-clenching movements generate EMG signals that are also used. By analysing these electric potential signals recorded from different parts of the face, a two-level interface is controlled. Eye movement selects a generic task category, whereas tongue movement selects a specific task from the category. Finally, teeth clenching executes the task. In the application, the authors developed a mechanism that can detect and distinguish between the tongue and eye movements, and differentiate the direction of the movement of either the tongue or the eye. Basically, this means there are four types of movements which have to be distinguished accurately: (i) Tongue (left to right), (ii) tongue (right to left), (iii) eye (left to right), and (iv) eye (right to left).
System Design: The experiment consisted of two phases, training and online. Table 6 gives more details. For the training part, both eye and tongue movements were recorded for seven rounds (trials). A g.Mobilab device was used for recording; it was used to record the EEG, EOG, EMG, and GKP signals in this experiment. The signals were digitised at 256 Hz and filtered above 0.5 Hz using a high-pass filter.
Table 6. Phases of the experiment in [73].

| Session | Trials | Purpose | Accuracy |
| --- | --- | --- | --- |
| Training | 7 (eye & tongue) | To train the detection model | - |
| Online | 1 | To evaluate the performance of the system | 86.7 ± 8.28% |
For eye movement, auditory cues were used to guide the user, whereas visual cues were used in the case of tongue movement. An RBF-SVM (Radial Basis Function SVM) model was trained for classifying the four kinds of movement. It was chosen because it has an enclosed decision boundary and can be used to reject irrelevant artefacts generated due to the motion of the electrodes. The distinction between tongue and eye movements was obtained using PCA-based feature extraction. For the online part, the authors evaluated the experiments in terms of: (i) Performance (accuracy and response time), (ii) task execution (this method has been extensively used in other case studies as well for evaluation, in which the user is asked to perform a set of tasks on the robot), and (iii) workload (to measure qualitative parameters). Figure 11a shows the two-level hierarchical menu displayed on the user screen to allow them to control the interface, as shown in Figure 11b. All similar tasks are grouped under a category in the two-level interface. By default, the task in the category at the central position of the screen is highlighted, which can be executed by a teeth-clenching movement resulting in the generation of an EMG signal. For navigation among the categories, eye movements (left to right and vice versa) are used. Furthermore, for navigation within a category, tongue movements (left to right and right to left) are used. Eye movements from left to right move the category selection in the clockwise direction, whereas right-to-left movements move it in an anticlockwise direction. Within a task category, a specific task was selected by tongue movements. After the selection of the task was made, the execution was done by a teeth-clenching movement. All the categories and one of the tasks used in [73], along with the transitions, are shown in Figure 11b.
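A hedged sketch of such a classifier is given below: PCA-based feature extraction followed by an RBF-kernel SVM, with a simple probability threshold standing in for the artefact rejection mentioned above. The feature dimensionality, kernel settings, threshold, and labels are assumptions, not the authors' configuration.

```python
# Illustrative four-class facial-movement classifier: PCA features + RBF-SVM (assumed settings).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["tongue_L2R", "tongue_R2L", "eye_L2R", "eye_R2L"]   # labels used during training

def build_classifier():
    return make_pipeline(StandardScaler(),
                         PCA(n_components=10),
                         SVC(kernel="rbf", C=1.0, gamma="scale", probability=True))

def classify_movement(clf, segment):
    """segment: (n_channels, n_samples) window of facial bio-potentials; clf is already fitted."""
    x = segment.reshape(1, -1)
    probs = clf.predict_proba(x)[0]
    if probs.max() < 0.5:            # low confidence: treat as an artefact, not a command
        return None
    return clf.classes_[int(np.argmax(probs))]
```

In use, `build_classifier().fit(X_train, y_train)` would first be trained on flattened training segments labelled with the four movement classes before `classify_movement` is called online.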
Results: The mean accuracy of the system was 86.7 ± 8.28% with an average response time of 2.77 ± 0.72 s. This scheme can be supported with facial recognition for expression recognition [74] and can be integrated with some of the action commands to increase robustness.
Figure 11. (a) Menu for selecting task, and (b) state diagram (adapted from: [73]).
5. Application Using BCI Supported by Augmented Reality (AR)/Virtual Reality (VR)
In this section, the application discussed uses augmented reality to create a sense of embodiment and to provide greater control over the environment.
5.1. Navigational Assistance using AR & BCI (Type: Rehabilitation)
Overview: In the application discussed in [75], a novel navigation scheme is presented to control a humanoid through BCI, enabling it to interact with the environment. SSVEP signals are used in this study. For interaction with humans, a high level of accuracy is desired. This is achieved using a sequence of manual and automated phases presented in the assistive navigation scheme. The HRP-2 robot is used in this demonstration.
The authors focus mainly on demonstrating a new navigation scheme that is assisted with a Head-Mounted Display (HMD) to increase the sense of embodiment by displaying the robot's camera
video feed to the user. The humanoid control is done by generating control commands using the SSVEP
paradigm. The elicitation of SSVEP is also done with the help of HMD. The navigational assistance is
achieved by executing a sequence of manual and automated phases. In general, the selection-based
phases are assigned to the user, whereas navigation and interaction-based tasks are automated to
achieve high-level accuracy while interacting with humans.
System Design: The experiment [75] is divided into five phases, as shown in Figure 12.
Figure 12. State diagram of assistive navigation (adapted from: [75]).
Major characteristics of these phases are listed below:
1. Manual navigation phase—This is a manual phase that requires the task to be performed by a user. The phase is limited to the user locating themselves using the robot's camera. The output of the camera is visible in the HMD;
2. Body part selection phase—This phase is also performed by the user manually. In this phase, the user selects the body part with which the humanoid robot is expected to interact;
3. Assistive navigation phase—This is an automated phase. The robot uses SLAM [76] to navigate towards the selected body part. The experiment also shows that this kind of navigation is better because of the difficulty associated with manual navigation, which causes navigation errors along with slow execution of the task;
4. Interaction selection phase—This is a manual task. The user selects the type of interaction on the selected body part;
5. Interaction phase—This is an automated phase. The humanoid performs minor adjustments to perform the interaction. In this particular application, a user's arm is touched. But in general, any task can be configured in the humanoid, and it will execute the task when triggered.
The navigational assistance system consists of an HMD which is responsible for displaying the live video feed and for the elicitation of SSVEP signals to generate control commands. AR markers were placed on the HMD and the user's arms, which help in performing the automated phases. As shown in Figure 13a, SSVEP was evoked by flickering the body parts, which was used for body part selection by the user. A g.USBamp was used to acquire the data with a sampling rate of 256 Hz, combined with a band-pass filter (0.5–30 Hz) and a notch filter (50 Hz). Similarly, SSVEP was evoked during the interaction selection phase as well. Finally, as shown in Figure 13b, the robot adjusted itself in small steps. The robot initiated the action when it reached a comfortable pose.
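As a rough illustration of how SSVEP-based selection can work (not the authors' implementation), the sketch below picks the body part whose assumed flicker frequency dominates the spectrum of the already filtered EEG.

```python
# Illustrative SSVEP target selection by spectral power around each flicker frequency.
import numpy as np

FS = 256.0                                                        # sampling rate from the study
FLICKER_HZ = {"left_arm": 9.0, "right_arm": 11.0, "torso": 13.0}  # assumed frequency mapping

def ssvep_select(eeg_segment):
    """eeg_segment: 1-D occipital-channel EEG, already band-pass and notch filtered."""
    spectrum = np.abs(np.fft.rfft(eeg_segment)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_segment), d=1.0 / FS)
    scores = {}
    for target, f0 in FLICKER_HZ.items():
        band = (freqs > f0 - 0.5) & (freqs < f0 + 0.5)            # narrow band around the flicker
        scores[target] = spectrum[band].sum()
    return max(scores, key=scores.get)                            # dominant flicker wins
```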
Results: The task for this application was touching the user's arm, as shown in Figure 13. The system operated at an accuracy of more than 80% with a training time of about 6 minutes.
Figure 13. (a) SSVEP for arm selection and (b) interaction phase (selected arm is touched).
6. Summary of Applications
In this paper, we discussed various applications that deal with controlling a humanoid with
the help of BCI signals. These experiments were performed using various humanoids and different
translation algorithms were used to generate the control signals. Table 7 presents the summary of the
studies considered in this review.
Table 7. Summary of applications.

| Name | Related Works | Used Signal | Classifier | Humanoid Used | Description |
| --- | --- | --- | --- | --- | --- |
| Fetching Water (Rossella et al., 2017) [66] | [77–80] | P300 | Stepwise LDA | NAO Humanoid | Humanoid fetches a glass of water for a patient using BCI-P300 |
| Telepresence (Batyrkhan et al., 2018) [68] | [81–87] | P300 | Logistic Regression | NAO Humanoid | A user can interact with the world remotely using a humanoid controlled by BCI |
| Museum Guide (Antonio et al., 2009) [69] | [88,89] | P300 | N.A. | PeopleBot & Pioneer3 | A user can control a robot to visit a museum remotely |
| Picking Object (Bio-Feedback) (Rosario et al., 2018) [70] | [90–93] | P300 + eyeball tracking | Stepwise LDA | NAO Humanoid | Picking & placing objects, with control signals generated based on biological feedback & brain signals |
| Control by Facial Signal (Yunjun et al., 2014) [73] | [94–99] | EOG, EMG, GKP | SVM | NAO Humanoid | Humanoid is controlled by facial signals which do not depend on the spine for signal delivery |
| Navigational Assistance (Damien et al., 2014) [75] | [100–106] | SSVEP | N.A. | HRP-2 Humanoid | A navigation scheme is presented to achieve greater precision while performing actions using a humanoid |
7. Conclusions
BCI has emerged as a new communication system and is an active field of research. This paper
discussed BCI-controlled humanoid applications of three kinds: a. The ones using just EEG signals,
b. using Hybrid BCI, and c. Augmented reality-assisted BCI humanoid control. Section 3 discussed three applications that make use of P300 signals as an input for classification. These signals were generated using a grid-like user interface denoting different actions. Section 4 covered two applications which combine input from multiple sensors to increase the robustness of the system. The applications discussed in Sections 3.1 and 4.1 are similar. However, the application in Section 4.1 used neuro-biological feedback to accomplish the task, and had better accuracy on account of using multiple inputs. The application in Section 5 used augmented reality to demonstrate a navigation scheme that could be controlled from a head-mounted display. Most of the applications discussed in this paper deal with increasing the quality of life of a person with paralysis or motor impairment, though they could also be beneficial for a healthy person in some cases. Current applications have experimented with objectives ranging from accompanying a patient to fetch a glass of water using humanoids to using augmented reality for humanoid control. A major issue faced while implementing each of the applications was the process of training and calibration, which takes time. Most of the complementary techniques deal with reducing the training time and improving the online accuracy while performing the action. This paper reinforced the fact that BCI can be used to control a humanoid with a good amount of accuracy. In most of the applications discussed, this was achieved by dividing the experiment into phases and having an initial training phase to tune the model according to the subject.
Author Contributions:
Conceptualization, V.C. and A.V.; Methodology, V.C. and A.V.; software, V.C. and A.V.;
validation, V.C. and A.N.; formal analysis, V.C. and A.V.; investigation, A.V. and V.C.; resources, V.C. and A.V.;
data curation, A.V. and V.C.; writing—original draft preparation, A.V. and V.C.; writing—review and editing,
A.N., E.H. and V.C.; visualization, V.C.; supervision, A.N. and E.H.; project administration, A.N. and V.C.; funding
acquisition, E.H. and A.N. All authors have read and agreed to the published version of the manuscript.
Funding:
This work is supported by BITS Additional competitive Research Grant funding under Project Grant
File no. PLN/AD/2018-19/6 for the Project titled “Brain Computer Interface Controlled Humanoid”.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Mantri, S.; Dukare, V.; Yeole, S.; Patil, D.; Wadhai, V.M. A Survey: Fundamental of EEG. Int. J. Adv. Res. Comput. Sci. Manag. Stud. 2013, 1, 1–7.
2. Pfurtscheller, G.; Neuper, C.; Guger, C.; Harkam, W.; Ramoser, H.; Schlögl, A.; Obermaier, B.; Pregenzer, M. Current trends in Graz Brain-Computer Interface (BCI) research. IEEE Trans. Rehabil. Eng. 2000, 8, 216–219, doi:10.1109/86.847821.
3. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain Computer Interfaces, a Review. Sensors 2012, 12, 1211–1279, doi:10.3390/s120201211.
4. Hirai, K.; Hirose, M.; Haikawa, Y.; Takenaka, T. The development of Honda humanoid robot. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), Leuven, Belgium, May 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 2, pp. 1321–1326.
5. Brooks, R.; Breazeal, C.; Marjanović, M.; Scassellati, B.; Williamson, M.M. The Cog Project: Building a Humanoid Robot. In Computer Vision; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1562, pp. 52–87.
6. George, M.; Tardif, J.-P.; Kelly, A. Visual and inertial odometry for a disaster recovery humanoid. In Field and Service Robotics; Springer: Cham, Switzerland, 2015; pp. 501–514.
7. Kakiuchi, Y.; Kojima, K.; Kuroiwa, E.; Noda, S.; Murooka, M.; Kumagai, I.; Ueda, R.; Sugai, F.; Nozawa, S.; Okada, K.; Inaba, M. Development of humanoid robot system for disaster response through team NEDO-JSK's approach to DARPA Robotics Challenge Finals. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 805–810.
8. Vukobratović, M. Humanoid robotics, past, present state, future. Director Robotics Center, Mihailo Pupin Inst. 2006, 11000, 13–27.
9. Vukobratović, M. Active exoskeletal systems and beginning of the development of humanoid robotics. Facta Univ.-Ser. Mech. Autom. Control. Robot. 2008, 7, 243–262.
10.
Shajahan, J.A.; Jain, S.; Joseph, C.; Keerthipriya, G.; Raja, P.K. Target detecting defence humanoid sniper.
In Proceedings of the 2012 Third International Conference on Computing, Communication and Networking
Technologies (ICCCNT’12), Coimbatore, India, 26 July 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–6.
11.
Alladi, T.; Chamola, V.; Sikdar, B.; Choo, K.K. Consumer IoT: Security vulnerability case studies and solutions.
IEEE Consum. Electron. Mag. 2020, 9, 17–25.
12.
Hassija, V.; Chamola, V.; Saxena, V.; Jain, D.; Goyal, P.; Sikdar, B. A Survey on IoT Security:
Application Areas, Security Threats, and Solution Architectures. IEEE Access 2019, 7, 82721–82743,
doi:10.1109/access.2019.2924045.
13.
Alladi, T.; Chamola, V.; Zeadally, S. Industrial Control Systems: Cyberattack trends and countermeasures.
Comput. Commun. 2020,155, 1–8, doi:10.1016/j.comcom.2020.03.007.
14.
Luo, R.C.; Chang, C.-C. Multisensor Fusion and Integration: A Review on Approaches and Its Applications
in Mechatronics. IEEE Trans. Ind. Inf. 2011,8, 49–60, doi:10.1109/TII.2011.2173942.
15.
Novak, D.; Riener, R. A survey of sensor fusion methods in wearable robotics. Robot. Auton. Syst. 2015, 73,
155–170, doi:10.1016/j.robot.2014.08.012.
16.
Wolpaw, J.R.; Birbaumer, N.; Heetderks, W.; McFarland, D.; Peckham, P.; Schalk, G.; Donchin, E.; Quatrano, L.;
Robinson, C.; Vaughan, T. Brain-computer interface technology: a review of the first international meeting.
IEEE Trans. Rehabil. Eng. 2000,8, 164–173, doi:10.1109/tre.2000.847807.
17.
Fabiani, G.; McFarland, D.; Wolpaw, J.R.; Pfurtscheller, G. Conversion of EEG Activity Into Cursor
Movement by a Brain–Computer Interface (BCI). IEEE Trans. Neural Syst. Rehabil. Eng. 2004, 12, 331–338,
doi:10.1109/tnsre.2004.834627.
18.
Minguillon, J.; Lopez-Gordo, M.A.; Pelayo, F. Trends in EEG-BCI for daily-life: Requirements for artifact
removal. Biomed. Signal Process. Control. 2017,31, 407–418, doi:10.1016/j.bspc.2016.09.005.
19.
Abdulkader, S.N.; Atia, A.; Mostafa, M.-S. Brain computer interfacing: Applications and challenges.
Egypt. Inf. J. 2015,16, 213–230, doi:10.1016/j.eij.2015.06.002.
20.
Gao, X.; Xu, D.; Cheng, M.; Gao, S. A BCI-based environmental controller for the motion-disabled. IEEE Trans.
Neural Syst. Rehabil. Eng. 2003, 11, 137–140, doi:10.1109/tnsre.2003.814449.
21.
Rebsamen, B.; Burdet, E.; Guan, C.; Zhang, H.; Teo, C.L.; Zeng, Q.; Laugier, C.; Ang, M. Controlling a
Wheelchair Indoors Using Thought. IEEE Intell. Syst. 2007,22, 18–24, doi:10.1109/MIS.2007.26.
22.
Reuderink, B. Games and Brain-Computer Interfaces: The State of the Art; WP2 BrainGain Deliverable, HMI,
University of Twente, The Netherlands, September 2008; pp. 1–11.
23.
Finke, A.; Lenhardt, A.; Ritter, H. The MindGame: A P300-based brain–computer interface game.
Neural Networks 2009,22, 1329–1333, doi:10.1016/j.neunet.2009.07.003.
24.
Li, W.; Jaramillo, C.; Li, Y. Development of mind control system for humanoid robot through a brain
computer interface. In Proceedings of the 2012 Second International Conference on Intelligent System Design
and Engineering Application, Sanya, Hainan, China, 6–7 January 2012; IEEE: Piscataway, NJ, USA, 2012;
pp. 679–682.
25.
Millán, J.D.; Rupp, R.; Müller-Putz, G.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.;
Cincotti, F.; Kubler, A.; Leeb, R.; et al. Combining brain–computer interfaces and assistive technologies:
state-of-the-art and challenges. Front. Mol. Neurosci. 2010,4, 161.
26.
Cortes, A.M.; Manyakov, N.V.; Chumerin, N.; Van Hulle, M.M. Language Model Applications to Spelling
with Brain-Computer Interfaces. Sensors 2014,14, 5967–5993, doi:10.3390/s140405967.
27.
Gomez-Gil, J.; San-Jose-Gonzalez, I.; Nicolas-Alonso, L.F.; Alonso-Garcia, S. Steering a Tractor by Means of
an EMG-Based Human-Machine Interface. Sensors 2011,11, 7110–7126, doi:10.3390/s110707110.
28.
Wang, F.; Zhang, X.; Fu, R.; Sun, G. Study of the Home-Auxiliary Robot Based on BCI. Sensors 2018, 18, 1779,
doi:10.3390/s18061779.
29.
Ahn, M.; Lee, M.; Choi, J.; Jun, S.C. A Review of Brain-Computer Interface Games and an Opinion Survey
from Researchers, Developers and Users. Sensors 2014,14, 14601–14633, doi:10.3390/s140814601.
30.
Sung, Y.; Cho, K.; Um, K. A Development Architecture for Serious Games Using BCI (Brain Computer
Interface) Sensors. Sensors 2012,12, 15671–15688, doi:10.3390/s121115671.
31.
Schalk, G.; McFarland, D.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A General-Purpose
Brain-Computer Interface (BCI) System. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043,
doi:10.1109/tbme.2004.827072.
32.
Chae, Y.; Jeong, J.; Jo, S. Toward Brain-Actuated Humanoid Robots: Asynchronous Direct Control Using an
EEG-Based BCI. IEEE Trans. Robot. 2012,28, 1131–1144, doi:10.1109/TRO.2012.2201310.
33.
Güneysu, A.; Akin, H.L. An SSVEP based BCI to control a humanoid robot by using portable EEG device.
In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6905–6908.
34.
Zander, T.O.; Kothe, C.; Jatzev, S.; Gaertner, M. Enhancing Human-Computer Interaction with Input from
Active and Passive Brain-Computer Interfaces. In Evaluating User Experience in Games; Springer: London,
UK, 2010; pp. 181–199.
35.
Shenoy, P.; Krauledat, M.; Blankertz, B.; Rao, R.P.N.; Müller, K.-R. Towards adaptive classification for BCI.
J. Neural Eng. 2006,3, R13–R23, doi:10.1088/1741-2560/3/1/r02.
36.
Lee, M.-H.; Fazli, S.; Mehnert, J.; Lee, S.-W. Subject-dependent classification for robust idle state detection
using multi-modal neuroimaging and data-fusion techniques in BCI. Pattern Recognit. 2015, 48, 2725–2737,
doi:10.1016/j.patcog.2015.03.010.
37.
Bansal, G.; Chamola, V.; Narang, P.; Kumar, S.; Raman, S. Deep3DSCan: Deep residual network and
morphological descriptor based framework for lung cancer classification and 3D segmentation. IET Image
Process. 2020,14, 1240–1247, doi:10.1049/iet-ipr.2019.1164.
38.
Chamola, V.; Hassija, V.; Gupta, V.; Guizani, M. A Comprehensive Review of the COVID-19 Pandemic and
the Role of IoT, Drones, AI, Blockchain, and 5G in Managing Its Impact. IEEE Access 2020,8, 90225–90265.
39.
Hassija, V.; Gupta, V.; Garg, S.; Chamola, V. Traffic Jam Probability Estimation Based on Blockchain and
Deep Neural Networks. IEEE Trans. Intell. Transp. Syst. 2020, 1–10, doi:10.1109/tits.2020.2988040.
40.
Hong, K.-S.; Khan, M.J. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy
and Increased Number of Commands: A Review. Front. Neurorobot. 2017, 11, doi:10.3389/fnbot.2017.00035.
41.
Choi, B.; Jo, S. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot
Navigation and Recognition. PLoS ONE 2013,8, e74583, doi:10.1371/journal.pone.0074583.
42.
Fazli, S.; Dähne, S.; Samek, W.; Bießmann, F.; Müller, K.-R. Learning From More Than
One Data Source: Data Fusion Techniques for Sensorimotor Rhythm-Based Brain–Computer Interfaces.
Proc. IEEE 2015, 103, 891–906.
43.
Pfurtscheller, G.; Allison, B.Z.; Brunner, C.; Bauernfeind, G.; Escalante, T.S.; Scherer, R.; Zander, T.O.;
Mueller-Putz, G.; Neuper, C.; Birbaumer, N. The Hybrid BCI. Front. Mol. Neurosci. 2010, 4,
doi:10.3389/fnpro.2010.00003.
44.
Aswath, S.; Tilak, C.K.; Suresh, A.; Udupa, G. Human Gesture Recognition for Real-Time Control of
Humanoid Robot. Int. J. Adv. Mech. Automob. Eng. 1, 1–5.
45.
Yun, S.-J.; Lee, M.-C.; Cho, S.-B. P300 BCI based planning behavior selection network for humanoid robot
control. In Proceedings of the 2013 Ninth International Conference on Natural Computation (ICNC),
Shenyang, China, 23–25 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 354–358.
46.
Horki, P.; Solis-Escalante, T.; Neuper, C.; Müller-Putz, G.R. Combined motor imagery and SSVEP based BCI
control of a 2 DoF artificial upper limb. Med. Biol. Eng. Comput. 2011, 49, 567–577, doi:10.1007/s11517-011-0750-2.
47.
Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: control signals review. Neurocomputing 2017, 223,
26–44, doi:10.1016/j.neucom.2016.10.024.
48.
Guger, C.; Daban, S.; Sellers, E.; Holzner, C.; Krausz, G.; Carabalona, R.; Gramatica, F.; Edlinger, G. How many
people are able to control a P300-based brain–computer interface (BCI)? Neurosci. Lett. 2009, 462, 94–98,
doi:10.1016/j.neulet.2009.06.045.
49.
Mellinger, J.; Schalk, G.; Braun, C.; Preissl, H.; Rosenstiel, W.; Birbaumer, N.; Kübler, A. An MEG-based
brain–computer interface (BCI). NeuroImage 2007, 36, 581–593, doi:10.1016/j.neuroimage.2007.03.019.
50.
Müller-Putz, G.; Scherer, R.; Brunner, C.; Leeb, R.; Pfurtscheller, G. Better than random: a closer look on BCI
results. Int. J. Bioelectromagn. 2008,10, 52–55.
51.
Ebenuwa, S.H.; Sharif, M.S.; Alazab, M.; Al-Nemrat, A. Variance Ranking Attributes Selection
Techniques for Binary Classification Problem in Imbalance Data. IEEE Access 2019, 7, 24649–24666,
doi:10.1109/access.2019.2899578.
52.
Lotte, F.; Congedo, M.; Lecuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for
EEG-based brain–computer interfaces. J. Neural Eng. 2007,4, R1–R13, doi:10.1088/1741-2560/4/2/r01.
53.
Müller, K.R.; Krauledat, M.; Dornhege, G.; Curio, G.; Blankertz, B. Machine learning techniques for
brain-computer interfaces. Biomed. Tech. 2004,49, 11–22.
54.
Müller, K.-R.; Tangermann, M.; Dornhege, G.; Krauledat, M.; Curio, G.; Blankertz, B. Machine learning for
real-time single-trial EEG-analysis: From brain–computer interfacing to mental state monitoring. J. Neurosci.
Methods 2008,167, 82–90, doi:10.1016/j.jneumeth.2007.09.022.
55.
Krusienski, D.J.; Sellers, E.W.; Cabestaing, F.; Bayoudh, S.; McFarland, D.; Vaughan, T.M.; Wolpaw, J.R.
A comparison of classification techniques for the P300 Speller. J. Neural Eng. 2006, 3, 299–305,
doi:10.1088/1741-2560/3/4/007.
56.
Bi, L.; Fan, X.-A.; Liu, Y. EEG-Based Brain-Controlled Mobile Robots: A Survey. IEEE Trans. Human-Machine
Syst. 2013,43, 161–176, doi:10.1109/tsmcc.2012.2219046.
57.
Subasi, A.; Gursoy, M.I. EEG signal classification using PCA, ICA, LDA and support vector machines.
Expert Syst. Appl. 2010,37, 8659–8666, doi:10.1016/j.eswa.2010.06.065.
58.
Millan, J.D.R.; Mouriño, J. Asynchronous BCI and local neural classifiers: an overview of the adaptive brain
interface project. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 159–161, doi:10.1109/tnsre.2003.814435.
59.
Sturm, I.; Lapuschkin, S.; Samek, W.; Müller, K.-R. Interpretable deep neural networks for single-trial EEG
classification. J. Neurosci. Methods 2016,274, 141–145, doi:10.1016/j.jneumeth.2016.10.008.
60.
Kaper, M.; Meinicke, P.; Grossekathoefer, U.; Lingner, T.; Ritter, H. BCI Competition 2003—Data Set IIb:
Support Vector Machines for the P300 Speller Paradigm. IEEE Trans. Biomed. Eng. 2004, 51, 1073–1076,
doi:10.1109/tbme.2004.826698.
61.
Kawanabe, M.; Krauledat, M.; Blankertz, B. A Bayesian Approach for Adaptive BCI Classification.
In Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course, Austria, 2006; pp. 1–2.
62.
Gouaillier, D.; Hugel, V.; Blazevic, P.; Kilner, C.; Monceaux, J.; Lafourcade, P.; Marnier, B.; Serre, J.; Maisonnier,
B. The nao humanoid: a combination of performance and affordability. arXiv 2008, arXiv:0807.3223.
63.
Kaneko, K.; Kanehiro, F.; Kajita, S.; Hirukawa, H.; Kawasaki, T.; Hirata, M.; Akachi, K.; Isozumi, T.
Humanoid robot HRP-2. In Proceedings of the 2004 IEEE International Conference on Robotics and
Automation (ICRA 2004), New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 1083–1090.
64.
Ha, I.; Tamura, Y.; Asama, H.; Han, J.; Hong, D.W. Development of open humanoid platform
DARwIn-OP. In Proceedings of the SICE Annual Conference 2011, Tokyo, Japan, 13–18 September 2011;
IEEE: Piscataway, NJ, USA, 2011; pp. 2178–2181.
65.
Wirth, C.; Toth, J.; Arvaneh, M. “You Have Reached Your Destination”: A Single Trial EEG Classification
Study. Front. Mol. Neurosci. 2020,14, 66, doi:10.3389/fnins.2020.00066.
66.
Spataro, R.; Chella, A.; Allison, B.; Giardina, M.; Sorbello, R.; Tramonte, S.; Guger, C.; La Bella, V.
Reaching and grasping a glass of water by locked-in ALS patients through a BCI-controlled humanoid robot.
Front. Hum. Neurosci. 2017,11, 68.
67.
Farwell, L.; Donchin, E. Talking off the top of your head: toward a mental prosthesis utilizing event-related
brain potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523, doi:10.1016/0013-4694(88)90149-6.
68.
Saduanov, B.; Alizadeh, T.; An, J.; Abibullaev, B. Trained by demonstration humanoid robot controlled via a
BCI system for telepresence. In Proceedings of the 2018 6th International Conference on Brain-Computer
Interface (BCI), GangWon, Korea, 15–17 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.
69.
Chella, A.; Pagello, E.; Menegatti, E.; Sorbello, R.; Anzalone, S.M.; Cinquegrani, F.; Tonin, L.; Piccione,
F.; Prifitis, K.; Blanda, C.; et al. A BCI Teleoperated Museum Robotic Guide. In Proceedings of the 2009
International Conference on Complex, Intelligent and Software Intensive Systems, Fukuoka, Japan, 16–19
March 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 783–788.
70.
Sorbello, R.; Tramonte, S.; Giardina, M.E.; La Bella, V.; Spataro, R.; Allison, B.Z.; Guger, C.; Chella, A.
A Human–Humanoid Interaction Through the Use of BCI for Locked-In ALS Patients Using Neuro-Biological
Feedback Fusion. IEEE Trans. Neural Syst. Rehabil. Eng. 2017,26, 487–497, doi:10.1109/tnsre.2017.2728140.
71.
Alimardani, M.; Nishio, S.; Ishiguro, H. The Importance of Visual Feedback Design in BCIs; From
Embodiment to Motor Imagery Learning. PLoS ONE 2016, 11, e0161945, doi:10.1371/journal.pone.0161945.
72.
Tidoni, E.; Gergondet, P.; Kheddar, A.; Aglioti, S.M. Audio-visual feedback improves the BCI performance in
the navigational control of a humanoid robot. Front. Neurorobot. 2014,8, doi:10.3389/fnbot.2014.00020.
73.
Nam, Y.; Koo, B.; Cichocki, A.; Choi, S. GOM-Face: GKP, EOG, and EMG-Based Multimodal Interface
With Application to Humanoid Robot Control. IEEE Trans. Biomed. Eng. 2014, 61, 453–462,
doi:10.1109/tbme.2013.2280900.
74.
Zhang, H.; Jolfaei, A.; Alazab, M. A Face Emotion Recognition Method Using Convolutional Neural Network
and Image Edge Computing. IEEE Access 2019,7, 159081–159089, doi:10.1109/access.2019.2949741.
75.
Petit, D.; Gergondet, P.; Cherubini, A.; Meilland, M.; Comport, A.I.; Kheddar, A. Navigation assistance for a
BCI-controlled humanoid robot. In Proceedings of the 4th Annual IEEE International Conference on Cyber
Technology in Automation, Control and Intelligent, Hong Kong, China, 4–7 June 2014; IEEE: Piscataway, NJ,
USA, 2014; pp. 246–251.
76.
Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: part I. IEEE Robot. Autom. Mag. 2006,
13, 99–110, doi:10.1109/MRA.2006.1638022.
77.
Gergondet, P.; Kheddar, A.; Hintermüller, C.; Guger, C.; Slater, M. Multitask Humanoid Control with
a Brain-Computer Interface: User Experiment with HRP-2. In Experimental Robotics; Springer: Berlin,
Germany, 2012.
78.
Weisz, J.; Elvezio, C.; Allen, P.K. A user interface for assistive grasping. In Proceedings of the 2013 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; IEEE:
Piscataway, NJ, USA, 2013; pp. 3216–3221.
79.
Çağlayan, O.; Arslan, R.B. Humanoid robot control with SSVEP on embedded system. In Proceedings of the
5th International Brain-Computer Interface Meeting: Defining the Future, California, USA, 2013;
Taylor & Francis; pp. 260–261.
80.
Hochberg, L.R.; Bacher, D.; Jarosiewicz, B.; Masse, N.Y.; Simeral, J.D.; Vogel, J.; Haddadin, S.; Liu, J.;
Cash, S.S.; Van Der Smagt, P.; et al. Reach and grasp by people with tetraplegia using a neurally controlled
robotic arm. Nature 2012,485, 372–375, doi:10.1038/nature11076.
81.
Escolano, C.; Antelis, J.M.; Minguez, J. A Telepresence Mobile Robot Controlled With a Noninvasive
Brain–Computer Interface. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2011, 42, 793–804,
doi:10.1109/TSMCB.2011.2177968.
82.
Zhao, J.; Li, W.; Mao, X.; Hu, H.; Niu, L.; Chen, G. Behavior-Based SSVEP Hierarchical Architecture for
Telepresence Control of Humanoid Robot to Achieve Full-Body Movement. IEEE Trans. Cogn. Dev. Syst.
2017,9, 197–209, doi:10.1109/tcds.2016.2541162.
83.
Beraldo, G.; Antonello, M.; Cimolato, A.; Menegatti, E.; Tonin, L. Brain-Computer Interface Meets
ROS: A Robotic Approach to Mentally Drive Telepresence Robots. In Proceedings of the 2018 IEEE
International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018;
IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
84.
Aznan, N.K.N.; Connolly, J.D.; Al Moubayed, N.; Breckon, T.P. Using Variable Natural Environment
Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation. In Proceedings of the 2019
International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019;
IEEE: Piscataway, NJ, USA, 2019; pp. 4889–4895.
85.
Zhao, J.; Li, W.; Li, M. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of
Humanoid Robots. PLoS ONE 2015,10, e0142168, doi:10.1371/journal.pone.0142168.
86.
Thobbi, A.; Kadam, R.; Sheng, W. Achieving remote presence using a humanoid robot controlled by a
non-invasive BCI device. Int. J. Artif. Intell. Mach. Learn. 2010,10, 41–45.
87.
Leeb, R.; Tonin, L.; Rohm, M.; Desideri, L.; Carlson, T.; Millan, J.D.R. Towards Independence: A BCI
Telepresence Robot for People With Severe Motor Disabilities. Proc. IEEE 2015,103, 969–982.
88.
Escolano, C.; Antelis, J.; Mínguez, J. Human brain-teleoperated robot between remote places. In Proceedings
of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009;
IEEE: Piscataway, NJ, USA, 2009; pp. 4430–4437.
89.
Stawicki, P.; Gembler, F.; Volosyak, I. Driving a Semiautonomous Mobile Robotic Car Controlled by an
SSVEP-Based BCI. Comput. Intell. Neurosci. 2016,2016, 1–14, doi:10.1155/2016/4909685.
90.
Ma, J.; Zhang, Y.; Cichocki, A.; Matsuno, F. A Novel EOG/EEG Hybrid Human–Machine Interface Adopting
Eye Movements and ERPs: Application to Robot Control. IEEE Trans. Biomed. Eng. 2015, 62, 876–889,
doi:10.1109/tbme.2014.2369483.
91.
Kim, B.H.; Kim, M.; Jo, S. Quadcopter flight control using a low-cost hybrid interface with EEG-based
classification and eye tracking. Comput. Biol. Med. 2014, 51, 82–92, doi:10.1016/j.compbiomed.2014.04.020.
92.
Stawicki, P.; Gembler, F.; Rezeika, A.; Volosyak, I. A Novel Hybrid Mental Spelling Application Based on
Eye Tracking and SSVEP-Based BCI. Brain Sci. 2017,7, 35, doi:10.3390/brainsci7040035.
93.
Dong, X.; Wang, H.; Chen, Z.; Shi, B.E. Hybrid Brain Computer Interface via Bayesian integration
of EEG and eye gaze. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural
Engineering (NER), Montpellier, France, 22–24 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 150–153.
94.
Nam, Y.; Zhao, Q.; Cichocki, A.; Choi, S. Tongue-Rudder: A Glossokinetic-Potential-Based Tongue–Machine
Interface. IEEE Trans. Biomed. Eng. 2011,59, 290–299, doi:10.1109/TBME.2011.2174058.
95.
Navarro, R.B.; Boquete, L.; Mazo, M.; Lopez, E. System for assisted mobility using
eye movements based on electrooculography. IEEE Trans. Neural Syst. Rehabil. Eng. 2002, 10, 209–218,
doi:10.1109/tnsre.2002.806829.
96.
Tsui, C.S.L.; Jia, P.; Gan, J.Q.; Hu, H.; Yuan, K. EMG-based hands-free wheelchair control with EOG attention
shift detection. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics
(ROBIO), Sanya, China, 15–18 December 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1266–1271.
97.
Usakli, A.B.; Gürkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F. A hybrid platform based on EOG and
EEG signals to restore communication for patients afflicted with progressive motor neuron diseases.
In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, Minneapolis, MN, USA, 3–6 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 543–546.
98.
Postelnicu, C.-C.; Girbacia, F.; Talaba, D. EOG-based visual navigation interface development. Expert Syst.
Appl. 2012,39, 10857–10866, doi:10.1016/j.eswa.2012.03.007.
99.
Ramli, R.; Arof, H.; Ibrahim, F.; Mokhtar, N.; Idris, M.Y.I. Using finite state machine and a hybrid of EEG
signal and EOG artifacts for an asynchronous wheelchair navigation. Expert Syst. Appl. 2015, 42, 2451–2463,
doi:10.1016/j.eswa.2014.10.052.
100.
Martens, N.; Jenke, R.; Abu-Alqumsan, M.; Kapeller, C.; Hintermüller, C.; Guger, C.; Peer, A.; Buss, M.
Towards robotic re-embodiment using a Brain-and-Body-Computer Interface. In Proceedings of the 2012
IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October
2012; IEEE: Piscataway, NJ, USA, 2012; pp. 5131–5132.
101.
Acar, D.; Miman, M.; Akirmak, O.O. Treatment of anxiety disorders patients through EEG and augmented
reality. Eur. Soc. Sci. Res. J. 2014, 3, 18–27.
102.
Lenhardt, A.; Ritter, H. An Augmented-Reality Based Brain-Computer Interface for Robot Control.
In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2010;
pp. 58–65.
103.
Takano, K.; Hata, N.; Kansaku, K. Towards Intelligent Environments: An Augmented Reality–Brain–Machine
Interface Operated with a See-Through Head-Mount Display. Front. Mol. Neurosci. 2011, 5,
doi:10.3389/fnins.2011.00060.
104.
Faller, J.; Allison, B.Z.; Brunner, C.; Scherer, R.; Schmalstieg, D.; Pfurtscheller, G.; Neuper, C. A feasibility
study on SSVEP-based interaction with motivating and immersive virtual and augmented reality. arXiv 2017,
arXiv:1701.03981.
105.
Faller, J.; Leeb, R.; Pfurtscheller, G.; Scherer, R. Avatar navigation in virtual and augmented reality
environments using an SSVEP BCI (ICABB-2010). In Proceedings of the Brain-Computer Interfacing and Virtual
Reality Workshop, Venice, Italy, 2010; Volume 1.
106.
Kerous, B.; Liarokapis, F. BrainChat—A Collaborative Augmented Reality Brain Interface for Message
Communication. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented
Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 279–283.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).