sensors
Review
Brain-Computer Interface-Based Humanoid Control: A Review
Vinay Chamola 1, Ankur Vineet 1, Anand Nayyar 2,3 and Eklas Hossain 4,*
1 Department of Electrical and Electronics, Birla Institute of Technology & Science, Pilani 333031, India; vinay.chamola@pilani.bits-pilani.ac.in (V.C.); h20180144@pilani.bits-pilani.ac.in (A.V.)
2 Graduate School, Duy Tan University, Da Nang 550000, Vietnam; anandnayyar@duytan.edu.vn
3 Faculty of Information Technology, Duy Tan University, Da Nang 550000, Vietnam
4 Department of Electrical Engineering and Renewable Energy, Oregon Institute of Technology, Klamath Falls, OR 97601, USA
* Correspondence: eklas.hossain@oit.edu; Tel.: +1-541-885-1516
Received: 25 April 2020; Accepted: 17 June 2020; Published: 27 June 2020
Abstract: A Brain-Computer Interface (BCI) acts as a communication mechanism that uses brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, as in Passive BCI. This is especially beneficial for those with severe motor disabilities. Traditional BCI systems relied only on brain signals recorded using Electroencephalography (EEG) and used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as telepresence, grasping of objects, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also reviews the methods and system designs used in the discussed applications.
Keywords: brain-computer interface (BCI); data fusion; NAO humanoid; electroencephalography (EEG); P300; biological feedback
1. Introduction
Brain-Computer Interfaces (BCIs) lie at the intersection of signal processing, machine learning, and robotic systems. A Brain-Computer Interface is a technique that records and processes the brain signals of a person to perform a desired actuation. Electroencephalography (EEG), Electrocorticography (ECoG), and Near-Infrared Spectroscopy (NIRS) are a few methods used for recording brain signals; however, EEG is one of the most common methods used for BCI applications [1,2]. BCI provides an opportunity to develop a new form of communication mechanism controlled using brain signals. Such a mechanism becomes extremely helpful for those with motor impairment [3]. For example, applications such as brain-controlled limbs, brain-controlled wheelchairs, brain-controlled speech systems, etc. can be developed using a Brain-Computer Interface.
Combining this communication mechanism with a humanoid robot opens up several possibilities for replicating human actions. A humanoid robot [4,5] resembles the human body in terms of its shape and the range of actions it can perform. This makes the humanoid robot a perfect candidate for receiving actuation commands derived from brain signals and then interacting with its environment accordingly. Since a humanoid robot is almost a replica of a human being, it can be controlled to perform many of the day-to-day tasks that a human being performs. Thus, humanoids have great potential across a large number of prospective day-to-day applications. Such humanoids can especially serve as assistants for the disabled by helping them with their daily activities.
Humanoid systems can also be used in mission-critical operations like disaster recovery [6,7], military operations [8–10], etc. However, the reliability required of the system in such applications is much higher than in the applications mentioned earlier. Security of such systems is also a major concern. Hence, there has been growing research in this direction to secure such systems and thereby prevent them from being hacked and misused [11–13].
While designing a BCI-controlled humanoid, the brain-computer interface system requires a translation algorithm to convert the input brain signals into control signals for the humanoid. Traditionally, brain signals were taken as the sole input for this purpose. However, such systems at times suffered from long training times and poor accuracy, one of the major contributing factors being the significant variation in the input signal. To improve the performance of such systems, researchers have actively explored multi-sensor fusion over the past several years. Such systems are often termed hybrid BCI systems, and they make control decisions based on the fusion of inputs from various sensors. The use of multi-sensor fusion has been shown to improve the robustness of BCI-based systems [14,15]. The major contributions of this paper are as follows:
• This paper reviews various applications in which a humanoid is controlled using brain signals to perform a wide variety of tasks such as grasping of objects, navigation, telepresence, etc.;
• For each of the applications, we discuss an overview of the application, the system design, and the results associated with the experiments conducted;
• Specifically, in this review we consider BCI applications that use just EEG signals (discussed in Section 3), applications that use multi-sensor fusion, where in addition to EEG other sensor inputs are also considered for execution of the desired task (Section 4), as well as augmented reality-assisted BCI (Section 5);
• To the best of our knowledge, this work is the first review on BCI-controlled humanoids.
The rest of the paper is organised as follows: Section 2 discusses the preliminary knowledge required to understand the paper. Section 3 discusses applications where a humanoid robot is controlled using only brain signals. Section 4 discusses humanoid control applications using hybrid BCI. Section 5 discusses a BCI-controlled humanoid application supported by Augmented Reality. Section 6 summarises the applications discussed in the paper. Section 7 concludes the paper.
2. Preliminary Knowledge
This section discusses a few preliminary basics that are required to understand the works
described in the paper.
2.1. Brain-Computer Interface
Rehabilitation is one of the major areas where BCI finds its applications. BCI can act as a communication mechanism for those with motor impairment. In the case of people with motor impairment, their nervous system is not able to act on the brain's signals. For example, the brain may think of lifting the left hand, but because the person's left hand is paralyzed (on account of a nervous disorder), the hand may be unable to move. However, the signals from the brain can be directly sensed using EEG electrodes and used to control a robotic arm that imitates the lifting of the left arm [16]. Various works like [17–30] discuss several BCI applications. Such applications have greatly motivated recent advances in BCI, since it offers new communication possibilities for those who are paralyzed or suffer from various bodily disabilities. BCI works in three stages. The first stage involves taking input from the brain, which is generally done using Electroencephalography (EEG). The second stage consists of a translation algorithm that maps the input signals from the brain to a predefined output command, and the third stage involves controlling the external device based on that command [31–33].
Next, we discuss these three stages in a little more detail. BCI Input (the first stage): This stage consists of acquiring data pertaining to one or more features of the brain's activity. Different parts of the brain are responsible for different functions. For example, sensory functions related to vision are processed in the occipital lobe of the brain, while the frontal lobe is responsible for planning, decision-making, and speech production [34]. Depending on the desired action to be performed, EEG sensors can be used to acquire brain signals from the corresponding portion of the brain for further processing. The second stage, namely the Translation Algorithm, takes the acquired brain signals as input and translates them into a specific output command, which can be used for a particular action. In particular, this stage involves using various classification algorithms like Linear Discriminant Analysis (LDA), Artificial Neural Networks (ANN), etc. (as discussed in Section 2.3) to classify the action into a particular category. The key features of the translation algorithm are the transfer function used, its adaptability, and the control output generated. The transfer function can be linear (e.g., LDA) or non-linear (e.g., a neural network). Adaptive algorithms can use sophisticated machine-learning methods to adapt to the user's brain [35]. The third stage, BCI Output, deals with the output. The control output generated for application-specific devices can be of two forms: (i) discrete or (ii) continuous. A discrete output can be used for selection among fixed options (e.g., letter selection), while a continuous output can help in navigation (e.g., cursor movement) [17].
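To make the three stages concrete, the following minimal sketch (our own illustration, not taken from any of the reviewed works) wires acquisition, translation, and output into one loop. The acquire_epoch helper, the random data, and the two-command mapping are hypothetical placeholders, not a real amplifier interface.

```python
# Minimal, hypothetical sketch of the three BCI stages: acquisition,
# translation (a trained classifier), and output (a discrete command).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

COMMANDS = {0: "move_left", 1: "move_right"}  # example discrete outputs

def acquire_epoch(n_channels=8, n_samples=256):
    """Stage 1 (BCI Input): placeholder for an EEG epoch from an amplifier."""
    return np.random.randn(n_channels, n_samples)  # stand-in for real EEG

def extract_features(epoch):
    """Simple temporal features: per-channel mean and variance."""
    return np.concatenate([epoch.mean(axis=1), epoch.var(axis=1)])

# Stage 2 (Translation Algorithm): train a linear classifier on labelled epochs.
train_X = np.array([extract_features(acquire_epoch()) for _ in range(100)])
train_y = np.random.randint(0, 2, size=100)      # placeholder labels
clf = LinearDiscriminantAnalysis().fit(train_X, train_y)

# Stage 3 (BCI Output): translate a new epoch into a discrete command.
label = clf.predict([extract_features(acquire_epoch())])[0]
print("Issuing command:", COMMANDS[label])
```

In a real system the classifier would of course be trained on labelled calibration epochs rather than random data, but the division of labour between the three stages is the same.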
2.2. Hybrid BCI
Traditional BCI approaches depended on just using brain signals for generating output. However, it has been observed that the salient features of brain signals can differ among subjects; in fact, sometimes even for the same subject the features vary from trial to trial [36]. Also, analyzing a single aspect or feature can, at times, lead to missing important information. These challenges make machine learning very appropriate for specifying and extracting features from the signals. Machine learning has been used in various application areas in the past to solve challenges of diverse natures [37–39], and it also finds great applicability in solving challenges related to BCI signals. Machine learning methods have been able to increase the decoding accuracy markedly, as discussed later in the paper. To maximise the robustness of the system, increase the information transfer rate, and decrease the training time, the BCI system records and analyzes multiple complementary signals [40,41]. These systems use data fusion techniques and machine-learning algorithms for the fusion of complementary signals. This technique is termed Hybrid BCI, as demonstrated in Figure 1.
Figure 1. Block diagram of Hybrid Brain-Computer Interface (BCI).
Any Hybrid BCI system must fulfil four major criteria, which are as follows [42,43]:
1. Brain signals must be used in the BCI System;
2. The user should be able to control one of the brain signals intentionally;
3. The BCI System should do real-time processing of the signal;
4. The user must be provided with feedback of the BCI output.
Combinations of signals used by Hybrid BCI generally include Electromyography (EMG) [44] + Electroencephalography (EEG), Event-Related Desynchronization (ERD) along with Steady State Visual Evoked Potential (SSVEP), Near-Infrared Spectroscopy (NIRS) along with EEG, ERD along with P300, etc. [45,46]. Table 1 lists the description of the major signals and methods discussed above [47–49].
Table 1. Comparative Analysis of Various Methods used for Recording Features.

| S.No. | Method | Description | Characteristics |
|---|---|---|---|
| (1) | Electroencephalography (EEG) | Measures the electric signals produced by the human brain | Commonly used method; safe and affordable; poor spatial resolution |
| (1a) | Evoked signals: SSVEP | Brain signal generated in response to looking at a source flickering at a specific frequency | Training time is short; requires continuous attention to the stimuli; exhausting for the user after long sessions |
| (1a) | Evoked signals: P300 | Signal generated in response to an infrequent stimulus, recorded with a latency of 250–500 ms | — |
| (1b) | Spontaneous signals | Voluntary signals generated without an external stimulus | External stimuli not required; long training required |
| (2) | Electromyography (EMG) | Measures the electrical activity produced by skeletal muscles | Easy to record; more noise contamination |
| (3) | Electrocorticography (ECoG) | Measures the electric signals by placing electrodes beneath the skull | Better signal quality than EEG; risky (semi-invasive); less common |
| (4) | Functional magnetic resonance imaging (fMRI) | Measures changes in the metabolism of the brain (e.g., oxygen saturation) | Good spatial resolution; poor temporal resolution (1 s–2 s); sensitive to motion |
| (5) | Near-Infrared Spectroscopy (NIRS) | — | Good spatial resolution; poor temporal resolution (2 s–5 s) |
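As a rough illustration of the fusion idea behind hybrid BCI, the sketch below concatenates EEG and EMG feature vectors before a single classifier makes the decision. It is our own simplified example on synthetic placeholder data with a generic linear classifier, not an implementation from any cited work.

```python
# Hypothetical feature-level fusion for a hybrid BCI: EEG and EMG feature
# vectors are concatenated before one classifier maps them to a command.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 200
eeg_features = rng.normal(size=(n_trials, 16))   # placeholder EEG features
emg_features = rng.normal(size=(n_trials, 4))    # placeholder EMG features
labels = rng.integers(0, 2, size=n_trials)       # two control commands

# Feature-level fusion: simply stack the two complementary feature sets.
fused = np.hstack([eeg_features, emg_features])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("Fused-feature accuracy (random data, ~0.5):", clf.score(X_te, y_te))
```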
2.3. Classification Algorithms
A major requirement of the classifiers in BCI systems is to ensure good performance in terms of classification accuracy [50]. For example, let us take the case of a patient using a BCI-controlled wheelchair. Suppose they have the facility to control the wheelchair by taking it left, right, front, or back based on their thoughts. When they think that the wheelchair should move left, the BCI system should be able to process the brain signals appropriately and must classify the action as 'move left'. The classification algorithm has the task of taking multiple features (e.g., brain signals) as input and distinguishing between different classes (e.g., left, right, front, back in the example given here). In performing this task, it is important to choose features carefully so that the classification algorithm can reliably differentiate between the multiple classes [51]. The features that act as input to a BCI system for controlling humanoid robots are of two types: (i) temporal features or (ii) frequency features. Temporal features represent the amplitude of the generated signals over time, whereas frequency features represent the frequency power spectra of the signals. Generally, P300-based BCI uses temporal features, whereas ERD- and SSVEP-based BCI use frequency features.
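The short sketch below illustrates, under assumed shapes and band edges, how these two feature types could be extracted from a single EEG epoch; it is an illustration rather than a recipe used by the reviewed studies.

```python
# Illustrative extraction of temporal vs. frequency features from one epoch.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
epoch = np.random.randn(8, fs)             # placeholder epoch: 8 channels, 1 s

# Temporal features (as used by P300-based BCIs): the down-sampled amplitude
# time course of each channel.
temporal_features = epoch[:, ::8].ravel()

# Frequency features (as used by ERD/SSVEP-based BCIs): band power estimated
# from the Welch power spectral density, e.g. in the 8-15 Hz range.
freqs, psd = welch(epoch, fs=fs, nperseg=128)
band = (freqs >= 8) & (freqs <= 15)
frequency_features = psd[:, band].mean(axis=1)

print(temporal_features.shape, frequency_features.shape)
```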
Classification: Different classifiers are used to translate the features extracted from brain signals into control commands [52–55]. These classifiers range from simple linear classifiers to complex non-linear classifiers. Some of the commonly used classifiers are: (i) Linear Discriminant Analysis (LDA), (ii) Support Vector Machines (SVM), (iii) Artificial Neural Networks (ANN), and (iv) statistical classifiers [56]. These classifiers are discussed in detail below.
Linear Discriminant Analysis (LDA) [57]: LDA is a type of linear classifier. The major benefits of using LDA are that: (i) its computational complexity is low, and hence the time taken for classification is reduced, which is useful when running the algorithm in an online session as discussed later; and (ii) LDA is a simple classifier to use and visualise. Linearity can be a limitation when handling non-linear EEG data; on the other hand, simpler techniques like LDA are suitable when only a small training data set is available. LDA is used in a number of BCI-controlled humanoid applications for classification. A typical decision boundary of LDA is shown in Figure 2. For LDA, the decision regions are singly connected and convex. Figure 2 depicts a 3-class classification in which the colour of each region denotes the class being predicted.
Figure 2. Decision boundaries for the different classifiers (Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and Artificial Neural Networks (ANN)).
Artificial Neural Networks (ANN) [58,59]: ANN is a type of non-linear classifier inspired by the neuron structure of the brain. It is used to approximate non-linear functions. Using an ANN is generally computationally intensive and requires a number of parameters to be configured. It is more complex to use than LDA, and the computational time taken to generate the output is also longer. However, ANNs are highly adaptive and can be applied to a wide variety of use-cases. Unfortunately, ANNs are prone to over-fitting, and thus the selection of the parameters/architecture and the regularisation need to be done carefully. The decision boundary of an ANN can be seen in Figure 2; the non-linearity of the function is evident from the figure. The figure shows two classes, one represented in red and the other in blue, that have been classified using an ANN.
Support Vector Machines (SVM) [57,60]: SVM is also a non-linear classifier. However, when using SVM, extensive configuration is not needed. It is useful when the training data is limited, and most of the time it generalises better. This makes its use advantageous for BCI systems, as the classifier, once trained, classifies brain signals over multiple sessions. The features generated during multiple sessions may vary even for a single user; hence, models that are less sensitive to over-fitting may perform better. SVM also performs well with high-dimensional data. However, SVMs are sometimes slower than other classifiers, which becomes an issue when dealing with large data. The decision boundary maximising the margin between the classes is shown in Figure 2.
Statistical Classifiers: These classifiers [61] use posterior probabilities to select the class that has the highest probability given the input features of each new instance. This type of classifier utilises prior knowledge to classify instances. These classifiers also perform well in the presence of uncertainty, which is expected when dealing with brain signals; uncertainty in the signals can be caused by fatigue or learning effects.
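As a hedged illustration of the four classifier families described above, the sketch below cross-validates an LDA, an RBF-kernel SVM, a small multilayer perceptron, and a Gaussian Naive Bayes model on synthetic two-class "feature" data. The data, dimensions, and hyperparameters are placeholders, so the printed accuracies are illustrative only.

```python
# Hypothetical comparison of the four classifier families on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=32, n_informative=8,
                           random_state=0)   # stand-in for extracted features

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                               random_state=0),
    "Statistical (Gaussian NB)": GaussianNB(),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:26s} mean accuracy = {scores.mean():.2f}")
```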
Table 2 summarises the typical classifiers that are applied in BCI.
Table 2. Comparison of classification algorithms.

| Classifier | Mechanism | Properties | Choice Consideration |
|---|---|---|---|
| Linear Discriminant Analysis (LDA) | Decision boundary is made by maximising the separation between the class means and minimising the variance within each class | 1) Simple; 2) low computational cost; 3) decision boundary is linear | Suited for online sessions; smaller training set |
| Artificial Neural Networks (ANN) | Minimises the error in classifying training data by adjusting the weights of neural connections | 1) Many parameters to set; 2) computationally intensive; 3) decision boundary is non-linear; 4) prone to overfitting | Suitable for a variety of applications; sensitive to noisy data |
| Support Vector Machines (SVM) | Decision boundary maximises the margin between two classes | 1) Decision boundary can be linear or non-linear; 2) less prone to overfitting; 3) high computation for non-linear cases | Appropriate for high-dimensional data; less sensitive to noisy data |
| Statistical Classifiers | Estimates the probability corresponding to each class and selects the class with the highest probability | 1) Decision boundary is non-linear; 2) efficient for uncertain samples | Suited as an adaptive algorithm; considers variation in brain dynamics (e.g., fatigue) |
2.4. Humanoids
A humanoid robot is a robot with a body structure and features similar to that of a human.
Three main primitives for a humanoid robot are sensors, planning, and control. Humanoid robots
generally have proprioceptive sensors to sense the position and exteroceptive sensors to get data
on what is being touched. Actuators in humanoid robots mimic the action of muscles and joints.
The following is a list of humanoid robots that have been commonly used in BCI-controlled humanoid applications in the recent past, as shown in Figure 3. The NAO humanoid [62], developed by SoftBank Robotics, is one of the most commonly used and is actively employed for research and educational purposes.
Figure 3. Humanoids: (a) NAO (Nao Humanoid), (b) HRP 2, (c) KT-X, and (d) DARwIn-OP.
1. Nao Humanoid (Softbank Robotics) [62];
2. HRP-2 Humanoid (Kawada Industries) [63];
3. KT-X Humanoid (Kumotek Robotics) [24];
4. DARwIn-OP (Robotis) [64].
In general, the humanoid robots in the list above have the following set of characteristics:
1. 17–30 degrees of freedom;
2. Multiple sensors like gyroscopes, force sensors, etc. on different body parts such as the head, torso, arms, and legs;
3. Microphones and speakers to interact with humans;
4. Two cameras for object detection and recognition (in NAO);
5. Custom application development enabled by an open architecture.
Figure 4 gives an overview of the BCI-controlled humanoid applications discussed in the paper. The P300 signal is predominantly used in these applications as it gives high accuracy [48,65].
Figure 4. Overview of applications.
3. BCI-Controlled Humanoid Applications Using Only EEG
In this section, we discuss various BCI-controlled humanoid applications that use only the EEG signal as input. The EEG input is processed and translated into an appropriate control output. Specifically, we consider three applications, namely grasping a glass of water, telepresence, and a museum guide application using a BCI-controlled humanoid. These applications are discussed in the following subsections one by one. For every application, we provide an overview and a system design description, followed by the salient results associated with the experiments conducted.
3.1. Grasp a Glass of Water using NAO (Type: Rehabilitation)
Overview: This application [66] involves using a BCI-controlled humanoid to grasp a glass of water. This kind of application can be helpful for people who find it difficult to perform such a task because of their age or a serious medical condition such as Amyotrophic Lateral Sclerosis (ALS). Note that ALS patients depend completely on caretakers for their daily needs, and scientists and researchers have long been working to develop technologies to help such patients. A promising technology in this direction is the use of a BCI-controlled humanoid robot. The authors in [66] use an EEG-based approach to capture the brain's activity, which is recorded through electrodes implanted in cortical neurons. The signals were processed to actuate the humanoid to fetch the water. Salient state changes in their system are shown in Figure 5. The experiments for BCI humanoid control of this task were performed by both healthy individuals and those suffering from ALS, and they were divided into multiple sessions, namely: (i) Calibration Session, (ii) Online Session, and (iii) Robotic Session. The purpose of dividing the experiment into multiple sessions was to tune the signal processing parameters as well as the classifier before performing the actual task in the Robotic Session. This is necessary because the parameters depend on the subject performing the tasks. It also helps the subjects to get familiar with the system. A description of each session is given in Table 3. Note that in Table 3, the threshold refers to the percentage of correct command selections required to transition from one session to the next. Feedback indicates whether visual feedback about the correctness of the command was provided in the session. Accuracy is the ratio of correctly executed commands to the total number of commands. In this experiment, an ERP approach known as the oddball paradigm [67] was used, which uses visual evoked potentials. The oddball paradigm is an experimental design in which the subject is exposed to a sequence of repetitive stimuli that is infrequently interrupted by a deviant stimulus, and the reaction of the subject to the oddball stimulus is recorded. In this case study, the oddball paradigm is used to identify the infrequent visual stimuli elicited by highlighting the grid cell of the user's interest in the User Interface (UI) (Figure 6). The P300 brain signal becomes prominent approximately 300 ms after the stimulus.
Figure 5. State diagram of process (adapted from: [66]).
Table 3. BCI Sessions used in [66].

| Session | Trials | Threshold & Feedback | Purpose | Accuracy (%, mean ± standard deviation) |
|---|---|---|---|---|
| Calibration | 9 | 100%; no feedback | For tuning signal processing parameters | - |
| Online | 20 | 55%; with feedback | Train the classifier | Healthy: 74.5 ± 5.3; ALS patient: 69.75 ± 15.8 |
| Robotic | 10 | N.A.; with feedback | Robot executes the selected command | Healthy: 72.4 ± 9.4; ALS patient: 71.25 ± 17.3 |
System design: The system consisted of three major components: the user interface, the network interface, and the robotic system. The user interface used was a 3 × 3 matrix, as shown in Figure 6. Each grid cell in this figure represents an action performed by the humanoid. The interface provides two types of commands: the first set of commands controls the movement of the humanoid robot in the environment (forward, backward, turn, etc.), and the second set of commands is used to grasp and give items. The grid cells showing the hand icon in Figure 6 correspond to the grasp and give actions, while the rest of the cells correspond to different movement commands. The BCI data acquisition system, along with the user interface, collects the EEG signal using a g.USBamp EEG kit digitised at 256 Hz. Filters such as a notch filter and a Butterworth filter were used to strengthen the signal and remove noise. The machine learning algorithm used for classification was stepwise LDA with the One-vs-Rest approach. The One-vs-Rest approach takes one class as positive and the rest as negative and trains the classifier; it was used for selecting the class with the maximum distance from the hyperplane compared to all the other classes [66]. The network interface passed the commands from the BCI system to the robotic system. The application part was completely dependent on the robotic system, which allowed two types of control modes. Both modes are illustrated in Figure 7.
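The sketch below gives a rough idea of the kind of preprocessing and one-vs-rest LDA classification described above. The filter orders, epoch shapes, channel count, and class labels are our assumptions, not the authors' exact settings.

```python
# Illustrative preprocessing + one-vs-rest LDA, loosely following the text:
# 256 Hz EEG, notch + Butterworth filtering, stepwise-LDA-style classification.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.multiclass import OneVsRestClassifier

fs = 256
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)        # mains noise
b_band, a_band = butter(4, [0.5, 30.0], btype="band", fs=fs)

def preprocess(epoch):
    """Apply notch and band-pass filtering to one (channels x samples) epoch."""
    epoch = filtfilt(b_notch, a_notch, epoch, axis=-1)
    return filtfilt(b_band, a_band, epoch, axis=-1)

# Placeholder data: 90 epochs, 8 channels, 0.8 s windows; 9 grid commands.
epochs = np.random.randn(90, 8, int(0.8 * fs))
labels = np.repeat(np.arange(9), 10)

X = np.array([preprocess(e)[:, ::8].ravel() for e in epochs])
clf = OneVsRestClassifier(LinearDiscriminantAnalysis()).fit(X, labels)
print("Predicted grid command:", clf.predict(X[:1])[0])
```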
Figure 6. 3 × 3 matrix showing the user interface (adapted from: [66]). (a) Teleoperated Mode: the user gives directional commands using only arrows and (b) Autonomous Mode: the user gives high-level commands corresponding to the symbol.
Figure 7. Autonomous and teleoperated mode (adapted from: [66]).
• Teleoperated Mode: In this mode, the user controls the movement of the robot and also gives commands to grasp and give a glass of water;
• Autonomous Mode: In this mode, the user just gives abstract commands and the humanoid plans its actions according to the state.
Results: The experiment showed that the BCI system, along with humanoid robots, can be effectively used by ALS patients, with a mean accuracy of 71.25% in the robotic session. Additionally, one of the interesting observations reported by the authors was that the experimental setting (i.e., whether the experiment was conducted at home or in a lab setting) did not significantly affect the control performance.
3.2. Telepresence by Humanoid Using P300 Signal (Type: Entertainment)
Overview: The application discussed in the previous section was simpler in terms of the actions performed, but provided a granularity of control that is sometimes not desired at the user level. This section discusses an application in which a person is able to interact with the world through telepresence using a humanoid [68]. The control commands given to the humanoid in this case are high level, i.e., the humanoid performs several subtasks that are grouped together and denoted as one high-level task (an event; a few such events can be seen in Figure 8a). Two major techniques used for the implementation of this application were (i) programming by demonstration, in which the robot learns a task by observing someone performing it, and (ii) BCI-based control, in which the brain signal generated by the visual stimuli is converted to control signals by classifying the elicited P300 signal. In this experiment, similar to the previous one (i.e., Section 3.1), the complete process was divided into two sessions, illustrated in Table 4: (i) a calibration session and (ii) real-time operation. Training of the classifier was performed in the calibration session using the same EEG data, whereas in the previous case study it was performed in a separate session, named the online session. This experiment also used the oddball paradigm for elicitation of the brain signals. However, compared to the previous case study, the number of commands was increased to 16. All the commands are high level and are depicted in Figure 8a. The purpose of this was to hide the complexity of the humanoid control from the user. Logistic regression was used for the classification of signals; it was used to train a function that predicts whether the output is a target or non-target event [68]. For validation of the trained model, the subjects were asked to control the humanoid using brain signals, with a pre-decided set of tasks to be performed.
Table 4. Experiment sessions used in [68].

| Session | Trials | Feedback | Purpose | Accuracy (%) |
|---|---|---|---|---|
| Calibration | 5 | With feedback | Tune signal processing parameters & train classifier | - |
| Real-Time | - | With feedback | Control the humanoid robot | 78 |
System Design: Figure 8b shows the abstract system design of the entire system. Some of the functionalities from the actual architecture have been grouped in the diagram to focus on the key components. The FieldTrip buffer is the main driver of the whole architecture; it manages both the BCI system and the NAO system, and it also stores the BCI model. The subject uses the Graphical User Interface (GUI) to generate brain signals, which are recorded using g.USBamp and g.LADYbird hardware with a 256 Hz sampling frequency and 16-bit resolution. The signals are passed on to the BCI module either for tuning/training the model or for classification.
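A minimal sketch of the target vs. non-target logistic-regression step described above is shown below. The epoch shapes, the number of flash repetitions, and the way flash scores are aggregated into a command choice are assumptions for illustration, not the authors' implementation.

```python
# Sketch: logistic regression classifies each flash epoch as target/non-target;
# the command whose flashes look most target-like is selected.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_commands, reps, fs = 16, 10, 256
# Placeholder flash epochs: (commands x repetitions, channels, samples).
epochs = np.random.randn(n_commands * reps, 8, int(0.6 * fs))
is_target = np.random.randint(0, 2, size=n_commands * reps)  # calibration labels

X = epochs.reshape(len(epochs), -1)          # flatten each epoch to a vector
model = LogisticRegression(max_iter=1000).fit(X, is_target)

# Real-time operation: score every flash and pick the command whose flashes
# have the highest average target probability.
scores = model.predict_proba(X)[:, 1].reshape(n_commands, reps).mean(axis=1)
print("Selected high-level command index:", int(np.argmax(scores)))
```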
Figure 8. (a) 4 × 4 grid showing high-level commands and (b) abstract system pipeline for telepresence (adapted from: [68]).
Results: During the calibration session, the model is trained and stored in the buffer. During real-time operation, the stored model is used to classify signals. Based on the classification, events are generated and passed on to the NAO humanoid as control commands, and feedback is shown on the user's screen. The system achieved a real-time accuracy of 78% on average.
3.3. BCI Operated Museum Guide (Type: Entertainment)
Overview: This application [69] uses a remotely controlled robot operated by a healthy or paralysed person through BCI. The aim is to use the robot as a museum guide that sends remote visuals to the person operating it through BCI. In the application, the person uses P300 signals to control the navigation of the robot, which provides the user with a perception of telepresence, similar to the previous case study. Note that although the authors did not use a humanoid in their case study, a humanoid could very well be used in such an application, and thus the case study has been included. In this experiment, more focus was given to the GUI used in the BCI system. The GUI differs in that it is more user-friendly and is not arranged as a grid like the UIs used in the previous case studies. The proposed BCI system used the P300 brain signal; details about the BCI sessions are not discussed. In the new GUI, the selection of a command was done by focusing on a flashing navigation arrow, similar to the oddball paradigm used in the earlier experiments. To simplify the UI, the authors divided the process of selection into two parts, each with a different P300 elicitation interface. The first part occurs before the input phase begins. Here, the user was asked to select between two robots, Peoplebot and Pioneer3, depending upon the location they wanted to visit: Peoplebot was located in the Computer Science department, and Pioneer3 was located in the Botanic garden. Both robots were equipped with wheels for movement, a micro-controller, IR sensors, sonar rings for collision avoidance, and a camera. In general, the first part can be considered a selection between two robots, Robot 1 and Robot 2, located at two different locations; the user could select the robot as per their preference to visit a location, as shown in Figure 9a. After the selection of the robot, navigational instructions were given using a screen, as shown in Figure 9b. The arrows represent the directions of the robot's movement, which was continuous and could be stopped using the stop button. All of this was controlled using the brain signals based on P300. The screen in the middle displays the output generated by the robot's camera.
Figure 9. (a) Robot selection menu, (b) navigation screen, and (c) two views for the user (adapted from: [69]).
System Design: The communication between the robot and the BCI system follows a client-server architecture, with Transmission Control Protocol/Internet Protocol (TCP/IP) used in the network stack. The robot plays the role of the client, and the BCI system acts as the server. Initially, the robot tries to establish a connection with the BCI system and waits for a command to execute. The BCI architecture converts the signal from the brain into the corresponding command; the server then sends the command to the client program running at the robot end. The robot can handle three types of commands in general: (i) Start Session command, (ii) Execution command, and (iii) End Session command. When the client-server connection is established, a "Start" command is received by the robot, which enables direct control of the robot through brain signals. This control is stopped by receiving the "End" command. At the server end, after sending the command to be executed, the server waits for the action to be completed. If the action is completed, the server receives the result of the action from the client. However, if the command is not correct, the client sends a warning command to the server, and the server responds with the same command.
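To make the client-server exchange concrete, the sketch below runs a toy TCP server (playing the BCI system) and a client (playing the robot) in one process. The port, message format, and command names are hypothetical; the original system's protocol details are not specified in the paper.

```python
# Minimal sketch of the command exchange: the BCI system sends commands over
# TCP and the robot acknowledges each one. Lock-step send/reply keeps the
# messages from coalescing in the stream.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9500

def bci_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for cmd in ("START", "MOVE_FORWARD", "STOP", "END"):
                conn.sendall(cmd.encode())          # send command to robot
                reply = conn.recv(64).decode()      # wait for result/warning
                print("Server got reply:", reply)

def robot_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        while True:
            cmd = cli.recv(64).decode()
            if not cmd or cmd == "END":
                break
            cli.sendall(f"DONE:{cmd}".encode())     # report execution result

t = threading.Thread(target=bci_server)
t.start()
time.sleep(0.5)          # give the server time to start listening
robot_client()
t.join()
```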
Results: Using this application, a person could visit the museum through the robot thanks to telepresence. It was possible to visualise where the robot moved with the help of a two-dimensional map. The person could see the field of view (FOV) of the robot's camera through the graphical user interface shown in Figure 9c and then decide on the next displacement. Path planning could be done to mitigate sensor errors.
4. BCI-Controlled Humanoid Applications Using Hybrid BCI
In this section, we discuss applications where, in addition to the brain signals recorded using EEG, the control command also depends on complementary signals generated by other parts of the body. We discuss two case studies in this section.
4.1. Picking Objects Using Neuro-Biological Feedback Fusion (Type: Rehabilitation)
Overview: The application [70] discussed in this section is similar to the one in which a glass of water is fetched. However, the major difference is that it uses multi-sensor data for classifying the control commands. The authors discuss a new method of human-humanoid interaction for ALS-affected patients. They make use of a biofeedback factor, which depends on the user's intention, attention, and focus. This was then used to recognise the user's mental state, based on which the robot was directed to do certain tasks.
The task performed in this application is very similar to that of [66]. Similarity can also be seen in the way the experiment was divided into a Training Session, an Online Session, and a Robotic Session, as discussed in Table 5. These sessions were combined with the biological feedback to support decision making based on a certain threshold. The biological factors were used because they provide the mental state of the user. The architecture uses a combination of EEG signals, which are elicited using visual stimuli, along with a tracker that tracks the user's eye movement. This biofeedback-based system is used to extract features such as attention, intention, and focus. Figure 10b shows the actual workflow. The task of the experiment was to grasp a glass of water.
Table 5. Experiment sessions used in [70].

| Session | Trials | Threshold & Feedback | Purpose | Success | Bio-Feedback Factor |
|---|---|---|---|---|---|
| Calibration | Until 100% correctness (avg.: 3) | 100%; no feedback | Calibrate the BCI system over the neural response | - | - |
| Online | 10 | With feedback | Select the command with visual feedback | Healthy: 100%; ALS: 97.22% | Healthy: 78.15%; ALS: 79.61% |
| Robotic | 5 | With feedback | Select the command with robotic feedback | Healthy: 100%; ALS: 96.97% | Healthy: 75.83%; ALS: 84.25% |
System Design: The NAO humanoid is used along with a BCI system that includes a bio-signal amplifier, which converts the user's brain signals into digital form, and a tracker, which tracks the location of the focus of the user's eye, as shown in Figure 10a. The components of the system are as follows:
1. BCI system: Visual Evoked Potentials (VEPs) and P300 are used, and the oddball paradigm is used for eliciting ERPs. The salient features of the system were as follows:
Signal Processing: A g.USBamp device was used for recording the signals, using the standard 10–20 system. The signal was digitised at 256 Hz. A Butterworth filter was used to reduce artefacts, and a temporal filter was used to average the samples in order to reduce noise. In this study, 6 epochs, each with a window of 800 ms, were used.
Feature extraction: Fisher's stepwise Linear Discriminant is used during training to configure the system according to the user's brain. LDA is used to differentiate the different classes by using hyperplanes. In this application, LDA evaluates the stimuli recorded for every action on the grid and then selects the most prominent action corresponding to the grid.
User Interface: It is similar to the 3 × 3 grid used in [66] (Figure 6). Low-level behaviours include controlling all the possible directional movements of the humanoid, whereas high-level behaviours include issuing control commands like holding an item and giving the held item, similar to the ones considered in [66].
2. Biofeedback system using neurological states and gaze: The biofeedback system takes into account the user's eyes and brain activity. It includes four parameters: mental intention, attention, visual focus, and stress. An action is executed only when the biofeedback factor (B_f) is greater than 60%. The various modules associated with the bio-feedback system are explained below (see the sketch after this list for an illustrative computation of B_f):
Attention module: Since there are nine commands, Fisher's Linear Discriminant (FLD) is used with a one-versus-rest approach. The attention is expressed as a percentage and is based on the power of the P300 waves measured while performing the task.
Intention module: The correlation factor of the P300 wave is used to measure intention. It is based on the precision of the system.
Visual focus module: It is calculated by evaluating the user's gaze through eye-tracking, as shown in Figure 10a. Here, F_c represents the central focus, F_l the lateral focus, and F_o the outer focus; all values are expressed as percentages.
Entropy module: A stressful condition corresponds to high entropy in the brain signals. Signal processing steps are performed to extract the normalised value of the entropy. Finally, the value B_f is calculated by taking a weighted average of the attention, intention, and visual focus values.
3. Connection of the subject to the robot: For receiving commands from the BCI, a User Datagram Protocol (UDP) connection is made to the control interface. The connection to the robotic system is made through a TCP/IP socket for reliability.
4. Controlling the behaviour of the robot: Two control modes are proposed by the authors:
• Navigation mode: NAO can move in 6 ways, namely walking (forward and reverse), turning (left and right), and rotating (clockwise and anti-clockwise).
• High-level mode: This includes complex tasks like holding on to an object and giving the object to the user after identifying the user's location.
A distance metric (O) is also used to avoid collisions based on a threshold value: if the distance metric is less than the threshold value, it is considered safe to execute a command. Once that is ensured, the corresponding safe reaction command is activated along with the biological factor B_f and O, which are passed to a function that finally executes the command R_k corresponding to the control command.
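The sketch below illustrates how the biofeedback factor B_f could gate command execution as described above. The weighted-average weights are placeholders chosen for illustration, since the extracted text does not specify them; only the 60% threshold and the distance check come from the description.

```python
# Illustrative computation of the biofeedback factor B_f as a weighted
# average of attention, intention, and visual focus, gated by a 60% threshold.
def biofeedback_factor(attention, intention, visual_focus,
                       weights=(0.4, 0.3, 0.3)):
    """Return B_f in percent from the three module outputs (also in percent).
    The weights are assumed values, not the ones used in the original study."""
    w_a, w_i, w_v = weights
    return w_a * attention + w_i * intention + w_v * visual_focus

def should_execute(attention, intention, visual_focus, distance_ok=True,
                   threshold=60.0):
    """Execute the selected command R_k only if the distance-metric check has
    passed and B_f exceeds the 60% threshold."""
    bf = biofeedback_factor(attention, intention, visual_focus)
    return distance_ok and bf > threshold

# Example values loosely in the range reported for the online session.
print(should_execute(attention=74.6, intention=43.5, visual_focus=99.0))
```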
Results: In the experiment, the biological factor represents the mental state of the user. The average values of attention, visual focus, and intention for healthy users during the online session were 74.59%, 99.03%, and 43.52%, whereas for ALS users the values were 76.70%, 90.81%, and 63.01%. During the robotic session, the average values of these parameters for healthy users were 69.60%, 98.49%, and 42.98%, while ALS users achieved 79.45%, 96.16%, and 70.03%, respectively. The attention and intention values for ALS users were better than those of healthy users. The B_f value also increased in the robotic session for ALS users. This indicates that the presence of the robot in the robotic session acts as positive feedback, particularly for ALS users, supporting studies like [71,72]. The same can also be attributed to better attention and intention among ALS users.
Figure 10. (a) Eyeball tracking in a grid cell, (b) flow chart of the system using neuro-biological fusion (adapted from: [70]).
4.2. Humanoid Control using Facial Signals (Type: Entertainment)
Overview: This application [73] uses three types of bio-electric potentials: EOG (the electric potential generated by eye movement), Glossokinetic Potential (GKP, the electric signal originating from tongue movement), and EMG. Although the application discussed here uses these three signals, an EEG-based system is used for signal acquisition; thus, the BCI data can also be made use of. With that integration, the system can utilise all the electric potentials generated over the entire head region. The application can identify two types of tongue movements, i.e., left-to-right and right-to-left, and two kinds of horizontal eye movements analogous to the tongue movements; in addition, teeth-clenching movements generate EMG signals that are also used. By analysing these electric potential signals recorded from different parts of the face, a two-level interface is controlled. Eye movement selects a generic task category, whereas tongue movement selects a specific task from the category. Finally, teeth clenching executes the task. In the application, the authors developed a mechanism that can detect and distinguish between tongue and eye movements, and differentiate the direction of the movement of either the tongue or the eye. This means there are four types of movements that have to be distinguished accurately, namely: (i) tongue (left to right), (ii) tongue (right to left), (iii) eye (left to right), and (iv) eye (right to left).
System Design: The experiment consisted of two phases, training and online; Table 6 provides more details. For the training part, both eye and tongue movements were recorded for seven rounds (trials). A g.Mobilab device was used for recording; in this experiment it recorded the EEG, EOG, EMG, and GKP signals. The signals were digitised at 256 Hz and filtered above 0.5 Hz using a high-pass filter.
Table 6. Phases of experiment in [73].

| Session | Trials | Purpose | Accuracy |
|---|---|---|---|
| Training | 7 (eye & tongue) | To train the detection model | - |
| Online | 1 | To evaluate the performance of the system | 86.7 ± 8.28% |
For eye movement, auditory cues were used to guide the user, whereas visual cues were used in the case of tongue movement. An RBF-SVM (Radial Basis Function SVM) model was trained to classify the four kinds of movement; it was chosen because it has an enclosed decision boundary and can reject irrelevant artefacts generated by the motion of the electrodes. The distinction between tongue and eye movements was obtained using PCA-based feature extraction. For the online part, the authors evaluated the experiments in terms of: (i) performance (accuracy and response time), (ii) task execution (this method has been extensively used in other case studies as well for evaluation, in which the user is asked to perform a set of tasks on the robot), and (iii) workload (to measure qualitative parameters). Figure 11a shows the two-level hierarchical menu displayed on the user's screen, which allows them to control the interface as shown in Figure 11b. All similar tasks are grouped under a category in the two-level interface. By default, the task of the category at the central position of the screen is highlighted and can be executed by a teeth-clenching movement, which generates an EMG signal. For navigation among the categories, eye movements (left-to-right and right-to-left) are used, while navigation within a category uses tongue movements (left-to-right and right-to-left). A left-to-right eye movement moves the category selection in the clockwise direction, whereas a right-to-left movement moves it in an anticlockwise direction. Within a task category, a specific task is selected by tongue movements. After the selection of the task, execution is triggered by a teeth-clenching movement. All the categories and one of the tasks used in [73], along with the transitions, are shown in Figure 11b.
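The sketch below shows one way the PCA + RBF-SVM classification of the four facial movements could look. The window length, channel count, PCA dimensionality, and labels are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch of PCA feature extraction followed by an RBF-SVM classifier for the
# four facial movements (tongue L/R, eye L/R). Shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 256
MOVEMENTS = ["tongue L->R", "tongue R->L", "eye L->R", "eye R->L"]

# Placeholder training data: 7 trials per movement, 8 channels, 1 s windows.
X = np.random.randn(4 * 7, 8 * fs)
y = np.repeat(np.arange(4), 7)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),          # feature extraction
                      SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)

new_window = np.random.randn(1, 8 * fs)              # stand-in online sample
print("Detected movement:", MOVEMENTS[int(model.predict(new_window)[0])])
```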
Results: The mean accuracy of the system was 86.7 ± 8.28%, with an average response time of 2.77 ± 0.72 s. This scheme can be supplemented with facial expression recognition [74] and can be integrated with some of the action commands to increase robustness.
Figure 11. (a) Menu for selecting a task, and (b) state diagram (adapted from: [73]).
5. Application Using BCI Supported by Augmented Reality (AR)/Virtual Reality (VR)
The application discussed in this section uses augmented reality to create a sense of embodiment and to provide greater control over the environment.
5.1. Navigational Assistance using AR & BCI (Type: Rehabilitation)
Overview: In this application, discussed in [75], a novel navigation scheme is presented to control a humanoid through BCI, enabling it to interact with the environment. SSVEP signals are used in this study. For interaction with humans, a high level of accuracy is desired; this is achieved using the sequence of manual and automated phases presented in the assistive navigation scheme. The HRP-2 robot is used in this demonstration.
The authors focus mainly on demonstrating a new navigation scheme that is assisted by a Head-Mounted Display (HMD), which increases the sense of embodiment by displaying the robot's camera video feed to the user. The humanoid is controlled by generating control commands using the SSVEP paradigm, and the elicitation of SSVEP is also done with the help of the HMD. The navigational assistance is achieved by executing a sequence of manual and automated phases. In general, the selection-based phases are assigned to the user, whereas navigation and interaction-based tasks are automated to achieve high accuracy while interacting with humans.
System Design: The experiment [75] is divided into five phases, as shown in Figure 12.
Figure 12. State diagram of assistive navigation (adapted from: [75]).
Major characteristics of these phases are listed below:
1. Manual navigation phase—This is a manual phase that requires the task to be performed by the user. The phase is limited to the user locating himself using the robot's camera; the output of the camera is visible in the HMD;
2. Body part selection phase—This phase is also performed manually by the user. In this phase, the user selects the body part with which the humanoid robot is expected to interact;
3. Assistive navigation phase—This is an automated phase. The robot uses SLAM [76] to navigate towards the selected body part. The experiment also shows that this kind of navigation is better because of the difficulty associated with manual navigation, which causes errors in navigation along with slow execution of the task;
4. Interaction selection phase—This is a manual task. The user selects the type of interaction to perform on the selected body part;
5. Interaction phase—This is an automated phase. The humanoid performs minor adjustments to perform the interaction. In this particular application, the user's arm is touched; but in general, any task can be configured in the humanoid, and it will execute the task when triggered.
The navigational assistance system consists of an HMD, which is responsible for displaying the live video feed and for the elicitation of the SSVEP signals used to generate control commands. AR markers were placed on the HMD and the user's arms, which helps in performing the automated phases. As shown in Figure 13a, SSVEP was evoked by flickering the body parts, which was used for body part selection by the user. A g.USBamp was used to acquire the data at a sampling rate of 256 Hz, combined with a band-pass filter (0.5–30 Hz) and a notch filter (50 Hz). Similarly, SSVEP was evoked during the interaction selection phase. Finally, as shown in Figure 13b, the robot adjusted itself in small steps and initiated the action when it reached a comfortable pose.
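A rough sketch of frequency-domain SSVEP target detection is given below: spectral power is compared at each candidate flicker frequency and the strongest one is selected. The flicker frequencies, window length, and channel count are assumptions for illustration, not the settings used in [75].

```python
# Rough sketch of SSVEP detection: band-pass the window, then pick the
# candidate flicker frequency with the highest spectral power.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256
flicker_freqs = [8.0, 10.0, 12.0, 15.0]     # one per selectable body part

b, a = butter(4, [0.5, 30.0], btype="band", fs=fs)

def ssvep_target(eeg_window):
    """Return the index of the flicker frequency with the highest power."""
    filtered = filtfilt(b, a, eeg_window, axis=-1)
    spectrum = np.abs(np.fft.rfft(filtered, axis=-1)).mean(axis=0)
    freqs = np.fft.rfftfreq(filtered.shape[-1], d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in flicker_freqs]
    return int(np.argmax(powers))

window = np.random.randn(8, 4 * fs)          # 4 s of 8-channel placeholder EEG
print("Selected body part index:", ssvep_target(window))
```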
Results: The task for this application was touching the user's arm, as shown in Figure 13. The system operated at an accuracy of more than 80% with a training time of about 6 minutes.
Figure 13. (a) SSVEP for arm selection and (b) interaction phase (selected arm is touched).
6. Summary of Applications
In this paper, we discussed various applications that deal with controlling a humanoid with
the help of BCI signals. These experiments were performed using various humanoids, and different translation algorithms were used to generate the control signals. Table 7 presents a summary of the
studies considered in this review.
Table 7. Summary of applications.

| Name | Related Works | Used Signal | Classifier | Humanoid Used | Description |
|---|---|---|---|---|---|
| Fetching Water (Rossella et al., 2017) [66] | [77–80] | P300 | Stepwise LDA | NAO humanoid | Humanoid fetches a glass of water for a patient using BCI-P300 |
| Telepresence (Batyrkhan et al., 2018) [68] | [81–87] | P300 | Logistic regression | NAO humanoid | A user can interact with the world remotely using a humanoid controlled by BCI |
| Museum Guide (Antonio et al., 2009) [69] | [88,89] | P300 | N.A. | PeopleBot & Pioneer3 | A user can control a robot to visit a museum remotely |
| Picking Object (Bio-Feedback) (Rosario et al., 2018) [70] | [90–93] | P300 + eyeball tracking | Stepwise LDA | NAO humanoid | Picking & placing objects; control signals are generated based on biological feedback & brain signals |
| Control by Facial Signal (Yunjun et al., 2014) [73] | [94–99] | EOG, EMG, GKP | SVM | NAO humanoid | Humanoid is controlled by facial signals, which do not depend on the spine for signal delivery |
| Navigational Assistance (Damien et al., 2014) [75] | [100–106] | SSVEP | N.A. | HRP-2 humanoid | A navigation scheme is presented to achieve greater precision while performing actions using a humanoid |
7. Conclusions
BCI has emerged as a new communication system and is an active field of research. This paper discussed BCI-controlled humanoid applications of three kinds: (a) those using just EEG signals, (b) those using hybrid BCI, and (c) augmented reality-assisted BCI humanoid control. Section 3 discussed three applications that make use of P300 signals as an input for classification; these signals were generated using a grid-like user interface denoting different actions. Section 4 covered two applications that combine input from multiple sensors to increase the robustness of the system. The applications in Section 3.1 and Section 4.1 are similar; however, the application in Section 4.1 used neuro-biological feedback to accomplish the task and had better accuracy on account of using multiple inputs. The application in Section 5 used augmented reality to demonstrate a navigation scheme that could be controlled from a head-mounted display. Most of the applications discussed in this paper deal with improving the quality of life of a person with paralysis or motor impairment, though they could also be beneficial for a healthy person in some cases. Current applications have experimented with objectives ranging from accompanying a patient to fetch a glass of water using humanoids to using augmented reality for humanoid control. A major issue faced while implementing each of the applications was the process of training and calibration, which takes time. Most of the complementary techniques deal with reducing the training time and improving the online accuracy while performing the action. This paper reinforced the fact that BCI can be used to control a humanoid with a good degree of accuracy. In most of the applications discussed, this was achieved by dividing the experiment into phases and having an initial training phase to tune the model according to the subject.
Author Contributions:
Conceptualization, V.C. and A.V.; Methodology, V.C. and A.V.; software, V.C. and A.V.;
validation, V.C. and A.N.; formal analysis, V.C. and A.V.; investigation, A.V. and V.C.; resources, V.C. and A.V.;
data curation, A.V. and V.C.; writing—original draft preparation, A.V. and V.C.; writing—review and editing,
A.N., E.H. and V.C.; visualization, V.C.; supervision, A.N. and E.H.; project administration, A.N. and V.C.; funding
acquisition, E.H. and A.N. All authors have read and agreed to the published version of the manuscript.
Funding:
This work is supported by BITS Additional competitive Research Grant funding under Project Grant
File no. PLN/AD/2018-19/6 for the Project titled “Brain Computer Interface Controlled Humanoid”.
Conflicts of Interest: The authors declare no conflict of interest.
References
1.
Mantri, S.; Dukare, V.; Yeole, S.; Patil, D.; Wadhai, V.M. A Survey: Fundamental of EEG. Int. J. Adv. Res.
Comput. Sci. Manag. Stud. 2013, Volume 1, Issue 4, 1-7.
2.
Pfurtscheller, G.; Neuper, C.; Guger, C.; Harkam, W.; Ramoser, H.; Schlögl, A.; Obermaier, B.; Pregenzer, M.
Current trends in Graz Brain-Computer Interface (BCI) research. IEEE Trans. Rehabil. Eng. 2000, 8, 216–219,
doi:10.1109/86.847821.
3.
Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain Computer Interfaces, a Review. Sensors
2012
,12, 1211–1279,
doi:10.3390/s120201211.
4.
Hirai, K.; Hirose, M.; Haikawa, Y.; Takenaka, T. The development of Honda humanoid robot. In Proceedings
of the 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), Leuven,
Belgium, 20–20 May 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 2, 1321–1326.
5.
Brooks, R.; Breazeal, C.; Marjanovi´c, M.; Scassellati, B.; Williamson, M.M. The Cog Project: Building a
Humanoid Robot. In Computer Vision; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1562, pp. 52–87.
6.
George, M.; Tardif, J.-P.; Kelly, A. Visual and inertial odometry for a disaster recovery humanoid. In Field and
Service Robotics; Springer: Cham, Switzerland, 2015; pp. 501–514.
7.
Kakiuchi, Y.; Kojima, K.; Kuroiwa, E.; Noda, S.; Murooka, M.; Kumagai, I.; Ueda, R.; Sugai, F.; Nozawa, S.;
Okada, K.; Inaba, M. Development of humanoid robot system for disaster response through team nedo-jsk’s
approach to darpa robotics challenge finals. In Proceedings of the 2015 IEEE-RAS 15th International
Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; IEEE: Piscataway, NJ,
USA, 2015; pp. 805–810.
8.
Vukobratovi´c, M. Humanoid robotics, past, present state, future. Director Robotics Center. Mihailo Pupin
Inst. 2006,11000, 13–27.
9.
Vukobratovi´c, M. Active exoskeletal systems and beginning of the development of humanoid robotics.
Facta Univ.-Ser. Mech. Autom. Control. Robot. 2008,7, 243–262.
Sensors 2020,20, 3620 19 of 23
10.
Shajahan, J.A.; Jain, S.; Joseph, C.; Keerthipriya, G.; Raja, P.K. Target detecting defence humanoid sniper.
In Proceedings of the 2012 Third International Conference on Computing, Communication and Networking
Technologies (ICCCNT’12), Coimbatore, India, 26 July 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–6.
11.
Alladi, T.; Chamola, V.; Sikdar, B.; Choo, K.K. Consumer iot: Security vulnerability case studies and solutions.
IEEE Consum. Electron. Mag. 2020,9, 17–25.
12.
Hassija, V.; Chamola, V.; Saxena, V.; Jain, D.; Goyal, P.; Sikdar, B. A Survey on IoT Security:
Application Areas, Security Threats, and Solution Architectures. IEEE Access
2019
,7, 82721–82743,
doi:10.1109/access.2019.2924045.
13.
Alladi, T.; Chamola, V.; Zeadally, S. Industrial Control Systems: Cyberattack trends and countermeasures.
Comput. Commun. 2020,155, 1–8, doi:10.1016/j.comcom.2020.03.007.
14.
Luo, R.C.; Chang, C.-C. Multisensor Fusion and Integration: A Review on Approaches and Its Applications
in Mechatronics. IEEE Trans. Ind. Inf. 2011,8, 49–60, doi:10.1109/TII.2011.2173942.
15.
Novak, D.; Riener, R. A survey of sensor fusion methods in wearable robotics. Robot. Auton. Syst. 2015, 73, 155–170, doi:10.1016/j.robot.2014.08.012.
16.
Wolpaw, J.R.; Birbaumer, N.; Heetderks, W.; McFarland, D.; Peckham, P.; Schalk, G.; Donchin, E.; Quatrano, L.;
Robinson, C.; Vaughan, T. Brain-computer interface technology: a review of the first international meeting.
IEEE Trans. Rehabil. Eng. 2000,8, 164–173, doi:10.1109/tre.2000.847807.
17.
Fabiani, G.; McFarland, D.; Wolpaw, J.R.; Pfurtscheller, G. Conversion of EEG Activity Into Cursor Movement by a Brain–Computer Interface (BCI). IEEE Trans. Neural Syst. Rehabil. Eng. 2004, 12, 331–338, doi:10.1109/tnsre.2004.834627.
18.
Minguillon, J.; Lopez-Gordo, M.A.; Pelayo, F. Trends in EEG-BCI for daily-life: Requirements for artifact
removal. Biomed. Signal Process. Control. 2017,31, 407–418, doi:10.1016/j.bspc.2016.09.005.
19.
Abdulkader, S.N.; Atia, A.; Mostafa, M.-S. Brain computer interfacing: Applications and challenges.
Egypt. Inf. J. 2015,16, 213–230, doi:10.1016/j.eij.2015.06.002.
20.
Gao, X.; Xu, D.; Cheng, M.; Gao, S. A bci-based environmental controller for the motion-disabled. IEEE Trans.
Neural Syst. Rehabil. Eng. 2003,11, 137–140, doi:10.1109/tnsre.2003.814449.
21.
Rebsamen, B.; Burdet, E.; Guan, C.; Zhang, H.; Teo, C.L.; Zeng, Q.; Laugier, C.; Ang, M. Controlling a
Wheelchair Indoors Using Thought. IEEE Intell. Syst. 2007,22, 18–24, doi:10.1109/MIS.2007.26.
22.
Reuderink, B. Games and Brain-Computer Interfaces: The State of the Art; WP2 BrainGain Deliverable; HMI, University of Twente: The Netherlands, September 2008; pp. 1–11.
23.
Finke, A.; Lenhardt, A.; Ritter, H. The MindGame: A P300-based brain–computer interface game.
Neural Networks 2009,22, 1329–1333, doi:10.1016/j.neunet.2009.07.003.
24.
Li, W.; Jaramillo, C.; Li, Y. Development of mind control system for humanoid robot through a brain
computer interface. In Proceedings of the 2012 Second International Conference on Intelligent System Design
and Engineering Application, Sanya, Hainan, China, 6–7 January 2012; IEEE: Piscataway, NJ, USA, 2012;
pp. 679–682.
25.
Millán, J.D.; Rupp, R.; Müller-Putz, G.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.;
Cincotti, F.; Kubler, A.; Leeb, R.; et al. Combining brain–computer interfaces and assistive technologies:
state-of-the-art and challenges. Front. Neurosci. 2010, 4, 161.
26.
Cortes, A.M.; Manyakov, N.V.; Chumerin, N.; Van Hulle, M.M. Language Model Applications to Spelling
with Brain-Computer Interfaces. Sensors 2014,14, 5967–5993, doi:10.3390/s140405967.
27.
Gomez-Gil, J.; San-Jose-Gonzalez, I.; Nicolas-Alonso, L.F.; Alonso-Garcia, S. Steering a Tractor by Means of
an EMG-Based Human-Machine Interface. Sensors 2011,11, 7110–7126, doi:10.3390/s110707110.
28.
Wang, F.; Zhang, X.; Fu, R.; Sun, G. Study of the Home-Auxiliary Robot Based on BCI. Sensors 2018, 18, 1779, doi:10.3390/s18061779.
29.
Ahn, M.; Lee, M.; Choi, J.; Jun, S.C. A Review of Brain-Computer Interface Games and an Opinion Survey
from Researchers, Developers and Users. Sensors 2014,14, 14601–14633, doi:10.3390/s140814601.
30.
Sung, Y.; Cho, K.; Um, K. A Development Architecture for Serious Games Using BCI (Brain Computer
Interface) Sensors. Sensors 2012,12, 15671–15688, doi:10.3390/s121115671.
31.
Schalk, G.; McFarland, D.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043, doi:10.1109/tbme.2004.827072.
32.
Chae, Y.; Jeong, J.; Jo, S. Toward Brain-Actuated Humanoid Robots: Asynchronous Direct Control Using an
EEG-Based BCI. IEEE Trans. Robot. 2012,28, 1131–1144, doi:10.1109/TRO.2012.2201310.
33.
Güneysu, A.; Akin, H.L. An SSVEP based BCI to control a humanoid robot by using portable EEG device.
In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6905–6908.
34.
Zander, T.O.; Kothe, C.; Jatzev, S.; Gaertner, M. Enhancing Human-Computer Interaction with Input from
Active and Passive Brain-Computer Interfaces. In Evaluating User Experience in Games; Springer: London,
UK, 2010; pp. 181–199.
35.
Shenoy, P.; Krauledat, M.; Blankertz, B.; Rao, R.P.N.; Müller, K.-R. Towards adaptive classification for BCI.
J. Neural Eng. 2006,3, R13–R23, doi:10.1088/1741-2560/3/1/r02.
36.
Lee, M.-H.; Fazli, S.; Mehnert, J.; Lee, S.-W. Subject-dependent classification for robust idle state detection using multi-modal neuroimaging and data-fusion techniques in BCI. Pattern Recognit. 2015, 48, 2725–2737, doi:10.1016/j.patcog.2015.03.010.
37.
Bansal, G.; Chamola, V.; Narang, P.; Kumar, S.; Raman, S. Deep3DSCan: Deep residual network and
morphological descriptor based framework for lung cancer classification and 3D segmentation. IET Image
Process. 2020,14, 1240–1247, doi:10.1049/iet-ipr.2019.1164.
38.
Chamola, V.; Hassija, V.; Gupta, V.; Guizani, M. A Comprehensive Review of the COVID-19 Pandemic and
the Role of IoT, Drones, AI, Blockchain, and 5G in Managing Its Impact. IEEE Access 2020,8, 90225–90265.
39.
Hassija, V.; Gupta, V.; Garg, S.; Chamola, V. Traffic Jam Probability Estimation Based on Blockchain and
Deep Neural Networks. IEEE Trans. Intell. Transp. Syst. 2020, 1–10, doi:10.1109/tits.2020.2988040.
40.
Hong, K.-S.; Khan, M.J. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review. Front. Neurorobot. 2017, 11, doi:10.3389/fnbot.2017.00035.
41.
Choi, B.; Jo, S. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot
Navigation and Recognition. PLoS ONE 2013,8, e74583, doi:10.1371/journal.pone.0074583.
42.
Fazli, S.; Dähne, S.; Samek, W.; Bießmann, F.; Müller, K.-R. Learning From More Than One Data Source: Data Fusion Techniques for Sensorimotor Rhythm-Based Brain–Computer Interfaces. Proc. IEEE 2015, 103, 891–906.
43.
Pfurtscheller, G.; Allison, B.Z.; Brunner, C.; Bauernfeind, G.; Escalante, T.S.; Scherer, R.; Zander, T.O.; Mueller-Putz, G.; Neuper, C.; Birbaumer, N. The Hybrid BCI. Front. Mol. Neurosci. 2010, 4, doi:10.3389/fnpro.2010.00003.
44.
Aswath, S.; Tilak, C.K.; Suresh, A.; Udupa, G. Human Gesture Recognition for Real-Time Control of Humanoid Robot. Int'l Journal of Advances in Mechanical & Automobile Engg., India, Volume 1, Issue 1, 1–5.
45.
Yun, S.-J.; Lee, M.-C.; Cho, S.-B. P300 BCI based planning behavior selection network for humanoid robot
control. In Proceedings of the 2013 Ninth International Conference on Natural Computation (ICNC),
Shenyang, China, 23–25 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 354–358.
46.
Horki, P.; Solis-Escalante, T.; Neuper, C.; Müller-Putz, G.R. Combined motor imagery and SSVEP based BCI
control of a 2 DoF artificial upper limb. Med. Biol. Eng. Comput. 2011, 49, 567–577, doi:10.1007/s11517-011-0750-2.
47.
Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: control signals review. Neurocomputing 2017, 223, 26–44, doi:10.1016/j.neucom.2016.10.024.
48.
Guger, C.; Daban, S.; Sellers, E.; Holzner, C.; Krausz, G.; Carabalona, R.; Gramatica, F.; Edlinger, G. How many people are able to control a P300-based brain–computer interface (BCI)? Neurosci. Lett. 2009, 462, 94–98, doi:10.1016/j.neulet.2009.06.045.
49.
Mellinger, J.; Schalk, G.; Braun, C.; Preissl, H.; Rosenstiel, W.; Birbaumer, N.; Kübler, A. An MEG-based
brain–computer interface (BCI). NeuroImage 2007, 36, 581–593, doi:10.1016/j.neuroimage.2007.03.019.
50.
Müller-Putz, G.; Scherer, R.; Brunner, C.; Leeb, R.; Pfurtscheller, G. Better than random: a closer look on BCI
results. Int. J. Bioelectromagn. 2008,10, 52–55.
51.
Ebenuwa, S.H.; Sharif, M.S.; Alazab, M.; Al-Nemrat, A. Variance Ranking Attributes Selection
Techniques for Binary Classification Problem in Imbalance Data. IEEE Access
2019
,7, 24649–24666,
doi:10.1109/access.2019.2899578.
52.
Lotte, F.; Congedo, M.; Lecuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for
EEG-based brain–computer interfaces. J. Neural Eng. 2007,4, R1–R13, doi:10.1088/1741-2560/4/2/r01.
53.
Müller, K.R.; Krauledat, M.; Dornhege, G.; Curio, G.; Blankertz, B. Machine learning techniques for
brain-computer interfaces. Biomed. Tech. 2004,49, 11–22.
54.
Müller, K.-R.; Tangermann, M.; Dornhege, G.; Krauledat, M.; Curio, G.; Blankertz, B. Machine learning for
real-time single-trial EEG-analysis: From brain–computer interfacing to mental state monitoring. J. Neurosci.
Methods 2008,167, 82–90, doi:10.1016/j.jneumeth.2007.09.022.
55.
Krusienski, D.J.; Sellers, E.W.; Cabestaing, F.; Bayoudh, S.; McFarland, D.; Vaughan, T.M.; Wolpaw, J.R. A comparison of classification techniques for the P300 Speller. J. Neural Eng. 2006, 3, 299–305, doi:10.1088/1741-2560/3/4/007.
56.
Bi, L.; Fan, X.-A.; Liu, Y. EEG-Based Brain-Controlled Mobile Robots: A Survey. IEEE Trans. Human-Machine
Syst. 2013,43, 161–176, doi:10.1109/tsmcc.2012.2219046.
57.
Subasi, A.; Gursoy, M.I. EEG signal classification using PCA, ICA, LDA and support vector machines.
Expert Syst. Appl. 2010,37, 8659–8666, doi:10.1016/j.eswa.2010.06.065.
58.
Millan, J.D.R.; Mouriño, J. Asynchronous BCI and local neural classifiers: an overview of the adaptive brain
interface project. IEEE Trans. Neural Syst. Rehabil. Eng. 2003,11, 159–161, doi:10.1109/tnsre.2003.814435.
59.
Sturm, I.; Lapuschkin, S.; Samek, W.; Müller, K.-R. Interpretable deep neural networks for single-trial EEG
classification. J. Neurosci. Methods 2016,274, 141–145, doi:10.1016/j.jneumeth.2016.10.008.
60.
Kaper, M.; Meinicke, P.; Grossekathoefer, U.; Lingner, T.; Ritter, H. BCI Competition 2003—Data Set IIb: Support Vector Machines for the P300 Speller Paradigm. IEEE Trans. Biomed. Eng. 2004, 51, 1073–1076, doi:10.1109/tbme.2004.826698.
61.
Kawanabe, M.; Krauledat, M.; Blankertz, B. A Bayesian Approach for Adaptive BCI Classification. In Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course, Austria, 2006; pp. 1–2.
62.
Gouaillier, D.; Hugel, V.; Blazevic, P.; Kilner, C.; Monceaux, J.; Lafourcade, P.; Marnier, B.; Serre, J.; Maisonnier,
B. The nao humanoid: a combination of performance and affordability. arXiv 2008, arXiv:0807.3223.
63.
Kaneko, K.; Kanehiro, F.; Kajita, S.; Hirukawa, H.; Kawasaki, T.; Hirata, M.; Akachi, K.; Isozumi, T. Humanoid robot HRP-2. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA 2004), New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 1083–1090.
64.
Ha, I.; Tamura, Y.; Asama, H.; Han, J.; Hong, D.W. Development of open humanoid platform DARwIn-OP. In Proceedings of the SICE Annual Conference 2011, Tokyo, Japan, 13–18 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2178–2181.
65.
Wirth, C.; Toth, J.; Arvaneh, M. “You Have Reached Your Destination”: A Single Trial EEG Classification Study. Front. Neurosci. 2020, 14, 66, doi:10.3389/fnins.2020.00066.
66.
Spataro, R.; Chella, A.; Allison, B.; Giardina, M.; Sorbello, R.; Tramonte, S.; Guger, C.; La Bella, V.
Reaching and grasping a glass of water by locked-in ALS patients through a BCI-controlled humanoid robot.
Front. Hum. Neurosci. 2017,11, 68.
67.
Farwell, L.; Donchin, E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523, doi:10.1016/0013-4694(88)90149-6.
68.
Saduanov, B.; Alizadeh, T.; An, J.; Abibullaev, B. Trained by demonstration humanoid robot controlled via a
BCI system for telepresence. In Proceedings of the 2018 6th International Conference on Brain-Computer
Interface (BCI), GangWon, Korea, 15–17 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.
69.
Chella, A.; Pagello, E.; Menegatti, E.; Sorbello, R.; Anzalone, S.M.; Cinquegrani, F.; Tonin, L.; Piccione,
F.; Prifitis, K.; Blanda, C.; et al. A BCI Teleoperated Museum Robotic Guide. In Proceedings of the 2009
International Conference on Complex, Intelligent and Software Intensive Systems, Fukuoka, Japan, 16–19
March 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 783–788.
70.
Sorbello, R.; Tramonte, S.; Giardina, M.E.; La Bella, V.; Spataro, R.; Allison, B.Z.; Guger, C.; Chella, A.
A Human–Humanoid Interaction Through the Use of BCI for Locked-In ALS Patients Using Neuro-Biological
Feedback Fusion. IEEE Trans. Neural Syst. Rehabil. Eng. 2017,26, 487–497, doi:10.1109/tnsre.2017.2728140.
71.
Alimardani, M.; Nishio, S.; Ishiguro, H. The Importance of Visual Feedback Design in BCIs; From Embodiment to Motor Imagery Learning. PLoS ONE 2016, 11, e0161945, doi:10.1371/journal.pone.0161945.
72.
Tidoni, E.; Gergondet, P.; Kheddar, A.; Aglioti, S.M. Audio-visual feedback improves the BCI performance in
the navigational control of a humanoid robot. Front. Neurorobot. 2014,8, doi:10.3389/fnbot.2014.00020.
73.
Nam, Y.; Koo, B.; Cichocki, A.; Choi, S. GOM-Face: GKP, EOG, and EMG-Based Multimodal Interface With Application to Humanoid Robot Control. IEEE Trans. Biomed. Eng. 2014, 61, 453–462, doi:10.1109/tbme.2013.2280900.
74.
Zhang, H.; Jolfaei, A.; Alazab, M. A Face Emotion Recognition Method Using Convolutional Neural Network
and Image Edge Computing. IEEE Access 2019,7, 159081–159089, doi:10.1109/access.2019.2949741.
75.
Petit, D.; Gergondet, P.; Cherubini, A.; Meilland, M.; Comport, A.I.; Kheddar, A. Navigation assistance for a
BCI-controlled humanoid robot. In Proceedings of the 4th Annual IEEE International Conference on Cyber
Technology in Automation, Control and Intelligent Systems, Hong Kong, China, 4–7 June 2014; IEEE: Piscataway, NJ,
USA, 2014; pp. 246–251.
76.
Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110, doi:10.1109/MRA.2006.1638022.
77.
Gergondet, P.; Kheddar, A.; Hintermüller, C.; Guger, C.; Slater, M. Multitask Humanoid Control with
a Brain-Computer Interface: User Experiment with HRP-2. In Experimental Robotics; Springer: Berlin,
Germany, 2012.
78.
Weisz, J.; Elvezio, C.; Allen, P.K. A user interface for assistive grasping. In Proceedings of the 2013 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; IEEE:
Piscataway, NJ, USA, 2013; pp. 3216–3221.
79.
Çağlayan, O.; Arslan, R.B. Humanoid robot control with SSVEP on embedded system. In Proceedings of the 5th International Brain-Computer Interface Meeting: Defining the Future, California, USA, 2013; Taylor & Francis; pp. 260–261.
80.
Hochberg, L.R.; Bacher, D.; Jarosiewicz, B.; Masse, N.Y.; Simeral, J.D.; Vogel, J.; Haddadin, S.; Liu, J.;
Cash, S.S.; Van Der Smagt, P.; et al. Reach and grasp by people with tetraplegia using a neurally controlled
robotic arm. Nature 2012,485, 372–375, doi:10.1038/nature11076.
81.
Escolano, C.; Antelis, J.M.; Minguez, J. A Telepresence Mobile Robot Controlled With a Noninvasive Brain–Computer Interface. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2011, 42, 793–804, doi:10.1109/TSMCB.2011.2177968.
82.
Zhao, J.; Li, W.; Mao, X.; Hu, H.; Niu, L.; Chen, G. Behavior-Based SSVEP Hierarchical Architecture for
Telepresence Control of Humanoid Robot to Achieve Full-Body Movement. IEEE Trans. Cogn. Dev. Syst.
2017,9, 197–209, doi:10.1109/tcds.2016.2541162.
83.
Beraldo, G.; Antonello, M.; Cimolato, A.; Menegatti, E.; Tonin, L. Brain-Computer Interface Meets
ROS: A Robotic Approach to Mentally Drive Telepresence Robots. In Proceedings of the 2018 IEEE
International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018;
IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
84.
Aznan, N.K.N.; Connolly, J.D.; Al Moubayed, N.; Breckon, T.P. Using Variable Natural Environment
Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation. In Proceedings of the 2019
International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019;
IEEE: Piscataway, NJ, USA, 2019; pp. 4889–4895.
85.
Zhao, J.; Li, W.; Li, M. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of
Humanoid Robots. PLoS ONE 2015,10, e0142168, doi:10.1371/journal.pone.0142168.
86.
Thobbi, A.; Kadam, R.; Sheng, W. Achieving remote presence using a humanoid robot controlled by a
non-invasive BCI device. Int. J. Artif. Intell. Mach. Learn. 2010,10, 41–45.
87.
Leeb, R.; Tonin, L.; Rohm, M.; Desideri, L.; Carlson, T.; Millan, J.D.R. Towards Independence: A BCI
Telepresence Robot for People With Severe Motor Disabilities. Proc. IEEE 2015,103, 969–982.
88.
Escolano, C.; Antelis, J.; Mínguez, J. Human brain-teleoperated robot between remote places. In Proceedings
of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009;
IEEE: Piscataway, NJ, USA, 2009; pp. 4430–4437.
89.
Stawicki, P.; Gembler, F.; Volosyak, I. Driving a Semiautonomous Mobile Robotic Car Controlled by an
SSVEP-Based BCI. Comput. Intell. Neurosci. 2016,2016, 1–14, doi:10.1155/2016/4909685.
90.
Ma, J.; Zhang, Y.; Cichocki, A.; Matsuno, F. A Novel EOG/EEG Hybrid Human–Machine Interface Adopting Eye Movements and ERPs: Application to Robot Control. IEEE Trans. Biomed. Eng. 2015, 62, 876–889, doi:10.1109/tbme.2014.2369483.
91.
Kim, B.H.; Kim, M.; Jo, S. Quadcopter flight control using a low-cost hybrid interface with EEG-based
classification and eye tracking. Comput. Biol. Med. 2014, 51, 82–92, doi:10.1016/j.compbiomed.2014.04.020.
92.
Stawicki, P.; Gembler, F.; Rezeika, A.; Volosyak, I. A Novel Hybrid Mental Spelling Application Based on
Eye Tracking and SSVEP-Based BCI. Brain Sci. 2017,7, 35, doi:10.3390/brainsci7040035.
93.
Dong, X.; Wang, H.; Chen, Z.; Shi, B.E. Hybrid Brain Computer Interface via Bayesian integration
of EEG and eye gaze. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural
Engineering (NER), Montpellier, France, 22–24 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 150–153.
94.
Nam, Y.; Zhao, Q.; Cichocki, A.; Choi, S. Tongue-Rudder: A Glossokinetic-Potential-Based Tongue–Machine
Interface. IEEE Trans. Biomed. Eng. 2011,59, 290–299, doi:10.1109/TBME.2011.2174058.
95.
Navarro, R.B.; Boquete, L.; Mazo, M.; Lopez, E. System for assisted mobility using eye movements based on electrooculography. IEEE Trans. Neural Syst. Rehabil. Eng. 2002, 10, 209–218, doi:10.1109/tnsre.2002.806829.
96.
Tsui, C.S.L.; Jia, P.; Gan, J.Q.; Hu, H.; Yuan, K. EMG-based hands-free wheelchair control with EOG attention
shift detection. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics
(ROBIO), Sanya, China, 15–18 December 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1266–1271.
97.
Usakli, A.B.; Gürkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F. A hybrid platform based on EOG and
EEG signals to restore communication for patients afflicted with progressive motor neuron diseases.
In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, Minneapolis, MN, USA, 3–6 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 543–546.
98.
Postelnicu, C.-C.; Girbacia, F.; Talaba, D. EOG-based visual navigation interface development. Expert Syst.
Appl. 2012,39, 10857–10866, doi:10.1016/j.eswa.2012.03.007.
99.
Ramli, R.; Arof, H.; Ibrahim, F.; Mokhtar, N.; Idris, M.Y.I. Using finite state machine and a hybrid of EEG signal and EOG artifacts for an asynchronous wheelchair navigation. Expert Syst. Appl. 2015, 42, 2451–2463, doi:10.1016/j.eswa.2014.10.052.
100.
Martens, N.; Jenke, R.; Abu-Alqumsan, M.; Kapeller, C.; Hintermüller, C.; Guger, C.; Peer, A.; Buss, M.
Towards robotic re-embodiment using a Brain-and-Body-Computer Interface. In Proceedings of the 2012
IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October
2012; IEEE: Piscataway, NJ, USA, 2012; pp. 5131–5132.
101.
Acar, D.; Miman, M.; Akirmak, O.O. Treatment of anxiety disorders patients through eeg and augmented
reality. Eur. Soc. Sci. Res. J. 2014,3, 18–27.
102.
Lenhardt, A.; Ritter, H. An Augmented-Reality Based Brain-Computer Interface for Robot Control.
In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2010;
pp. 58–65.
103.
Takano, K.; Hata, N.; Kansaku, K. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display. Front. Neurosci. 2011, 5, doi:10.3389/fnins.2011.00060.
104.
Faller, J.; Allison, B.Z.; Brunner, C.; Scherer, R.; Schmalstieg, D.; Pfurtscheller, G.; Neuper, C. A feasibility study on SSVEP-based interaction with motivating and immersive virtual and augmented reality. arXiv 2017, arXiv:1701.03981.
105.
Faller, J.; Leeb, R.; Pfurtscheller, G.; Scherer, R. Avatar navigation in virtual and augmented reality environments using an SSVEP BCI. In Proceedings of the Brain-Computer Interfacing and Virtual Reality Workshop (ICABB-2010), Venice, Italy, 2010; Volume 1.
106.
Kerous, B.; Liarokapis, F. BrainChat—A Collaborative Augmented Reality Brain Interface for Message
Communication. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented
Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 279–283.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).