Manuscript accepted for publication in Computational Intelligence and Neuroscience
Special Issue on Ergonomic Issues in Brain-Computer Interface Technologies: Current Status,
Challenges, and Future Direction
Evaluating a Semi-Autonomous Brain-Computer Interface Based
on Conformal Geometric Algebra and Artificial Vision
M. A. Ramirez-Moreno and D. Gutiérrez
Centro de Investigación y de Estudios Avanzados (CINVESTAV),
Unidad Monterrey, Apodaca, N.L., 66600, México.
Corresponding author:
D. Gutiérrez, Ph.D.
Centro de Investigación y de Estudios Avanzados (CINVESTAV)
Vía del Conocimiento 201, Parque de Investigación e Innovación Tecnológica (PIIT)
Autopista al Aeropuerto Km. 9.5, Lote 1, Manzana 29
Apodaca, N.L., 66600, México
Tel: (+52-81) 1156-1740 x 4513
Fax: (+52-81) 1156-1741
E-mail: dgtz@ieee.org
arXiv:1910.14109v1 [eess.SP] 30 Oct 2019
Abstract

In this paper, we evaluate a semi-autonomous brain-computer interface (BCI) for manipulation tasks. In such a system, the user controls a robotic arm through motor imagery commands. In traditional process-control BCI systems, the user has to provide those commands continuously in order to manipulate the effector of the robot step by step, which results in a tiresome process for simple tasks such as picking up an item and placing it elsewhere on a surface. Here, we take a semi-autonomous approach based on a conformal geometric algebra model that solves the inverse kinematics of the robot on the fly, so that the user only has to decide on the start of the movement and the final position of the effector (goal-selection approach). Under these conditions, we implemented pick-and-place tasks with a disk as the item and two target areas placed on the table at arbitrary positions. An artificial vision (AV) algorithm was used to obtain the positions of the items, expressed in the robot frame, from images captured with a webcam. The AV algorithm was then integrated with the inverse kinematics model to perform the manipulation tasks. As a proof of concept, different users were trained to control the pick-and-place tasks through both the process-control and the semi-autonomous goal-selection approaches, so that the performance of the two schemes could be compared. Our results show the superior performance of the semi-autonomous approach, as well as evidence of less mental fatigue with it.
Keywords:
brain-computer interface; semi-autonomous interaction; conformal geometric algebra; artificial vision.
1 Introduction
A brain-computer interface (BCI) is a system that enables a real-time user-device communication pathway through brain activity. Over the years, BCI research and development has mainly been oriented toward rehabilitation systems, as well as systems that help disabled patients regain, to some extent, their lost or diminished capabilities [1]. Devices that have been successfully controlled using BCIs include spellers, electric wheelchairs, robotic arms, electric prostheses, and humanoid robots [2, 3, 4, 5]. In BCI studies, the most common technique used to acquire brain activity non-invasively is electroencephalography (EEG).
In order to manipulate a device through brain activity, the design of the BCI must include the following stages: signal acquisition, filtering, feature extraction, classification, and device modeling and control [6]. During the filtering stage, unwanted noise and artifacts are removed from the signals using temporal and spatial filters. Then, temporal or spatial features of interest are extracted from the signals to build feature vectors. These vectors are formed by characteristic components of the signals, which are then used in the classification stage to decipher the user's intention. Lastly, the device is manipulated based on the result of the classification algorithm. Depending on the device and the complexity of the system, a model of the system may be needed to perform the desired tasks with precision. BCIs can be divided into two groups based on their control strategy: process-control and goal-selection. In the process-control strategy, users continuously control each part of the process by issuing low-level commands through the BCI, with no additional assistance. On the other hand, in the goal-selection strategy, users are responsible only for selecting their desired goal, and the system provides assistance to successfully perform the task with minimum effort [7]. In this case, the user performs high-level tasks by sending simple commands through the BCI.
Common paradigms used as control commands in BCI include steady-state visual evoked potentials (SSVEP), the P300 waveform, and motor imagery (MI). SSVEP is a resonance phenomenon occurring at the occipital and parietal lobes as a result of an oscillatory visual stimulus presented to a user at a constant frequency [8]. The P300 is an EEG signal component that appears about 300 ms after an event of voluntary attention, and it is usually observed during visual or auditory stimulus presentation [9]. MI presents as an event-related desynchronization (ERD) found at the sensorimotor areas, which generates a contralateral power decrease in the 8-13 Hz frequency range (also known as the µ band) [10]. Controlling a BCI with SSVEP or P300 requires less training than with MI, as the former represent an involuntary response to a stimulus. However, their use in BCI is limited because they require a stimulus presentation device. The training process to control MI-based BCIs (MI-BCI) might involve stimulus presentation as well; however, it can be excluded from the final application of the BCI. Even though MI-BCIs require longer training periods, they are better suited for close-to-real-life environments and self-paced BCIs [11].
Several studies have presented successfully implemented ERD-based BCIs, most of them using a process-control strategy [12, 13, 14]. Some goal-selection BCIs have been reported as well [15, 16]. In [7], users were trained on process-control and goal-selection MI-BCIs to perform one-dimensional cursor movements on a screen. The results suggest that users employing the goal-selection strategy showed higher accuracy and faster learning in comparison to the process-control approach. However, the authors state that a direct comparison of goal-selection and process-control in a more complicated (real-world) scenario has not yet been presented. In the present study, three-dimensional object manipulation tasks through a robotic arm are implemented in an MI-BCI. The complexity of these three-dimensional movements on real objects is higher than that of the one-dimensional movements on virtual objects presented in [7]. In [17], a semi-autonomous BCI was implemented to manipulate a robotic arm for tasks such as pouring a beverage into a glass on a tray, through SSVEP. In future research, tasks similar to those in [17] could be implemented in our BCI using MI instead, allowing a more natural execution of daily-life tasks without the need for a stimulus presentation screen.
In a typical process-control MI-BCI, the user controls the direction of the final effector of a robotic arm through low-level commands, which means that the user has to maneuver the robot in three-dimensional space to reach a desired target. Clearly, the user remains in a high-attention state during the maneuvers, being continuously aware of the final effector position throughout the whole task. This continuous awareness might lead to mental fatigue or frustration, which is undesirable as it can directly affect user performance and learning [18]. The analysis of P300 features, such as amplitude and latency, has been shown to be useful in identifying the depth of cognitive information processing [19]. The amplitude of the P300 waveform tends to decrease when users encounter cognitive tasks of high difficulty [20]. On the other hand, the P300 latency has been shown to increase when the stimulus is cognitively difficult to process [21]. Another study has reported a correlation between changes in the P300 component and BCI performance [22]. The evidence provided by these studies suggests that the analysis of the P300 could be used as a mental fatigue indicator during BCI training and control.
In order to diminish mental fatigue in BCI systems, a semi-autonomous BCI using a goal-selection strategy is proposed here. This system assists the user in performing a specific task by calculating all the variables needed to successfully execute it. Some studies have previously presented BCI designs focusing on this semi-autonomous approach, with successful results in performance, accuracy, and comfort for the user [23, 17, 24]. Therefore, this paper presents the implementation of a traditional low-level MI-BCI and of a semi-autonomous MI-BCI designed to perform object manipulation tasks with a robotic arm. In the process-control MI-BCI, the user commands the final effector of the robot to move in three-dimensional space to reach a target placed on a table. In the semi-autonomous MI-BCI, one small disk and two target areas are placed on a table; the robot reaches for the disk and places it on a specific target, which is selected by the user. As a proof of concept, two volunteers were trained on each BCI system, and their performance was evaluated and compared. A statistical P300 analysis was performed on all users in order to observe the differences in mental fatigue induced by the operation of the low-level and semi-autonomous BCIs.
In order to model the robot used in this experiment, a conformal geometric algebra (CGA) model was implemented in both the traditional and the semi-autonomous BCIs to solve the inverse kinematics of the robotic arm, i.e., to obtain the joint angles needed for a specific position of the final effector. Additionally, an artificial vision (AV) algorithm was integrated into the semi-autonomous BCI in order to provide information about the positions of the items on the table, referenced to the robot frame. As the implementation of the semi-autonomous BCI implies a higher computational load, the CGA model was chosen for the solution of the inverse kinematics: CGA has been shown to reduce the number of operations and, in some cases, the computational load when compared to traditional inverse kinematics solutions [25].
This paper is organized as follows: the CGA model and the AV algorithm are described in Section 2, and the design of both BCIs is explained in Section 3; evaluations of both algorithms and the performance results of users controlling both BCIs are presented in Section 4. Preliminary short reports on the system's implementation (but not its evaluation) have been presented in [26] and [27].
2 Robot modeling and artificial vision
In this section we describe each of the components required to compute the inverse kinematics of a
robotic arm by using CGA. Furthermore, here we explain in detail the AV algorithm used to obtain
the positions of the objects to be manipulated by the robot.
2.1 Conformal Geometric Algebra
Traditional methods to solve the inverse kinematics of robots involve several matrix operations, as well as many trigonometric expressions. All of this can result in a rather complex solution, depending on the modeled robot [28]. In this study, a CGA model is used instead, as it is considered to be computationally lighter, easier to implement, and highly intuitive. CGA has proved to be a powerful tool for solving the inverse kinematics of robotic arms [29, 30]. It also offers a reduction in operations when compared to traditional methods and provides runtime-efficient solutions. More information on its computational efficiency can be found in [31].
With this model, the joint angles of the robot are obtained for a specific position of the final effector. In CGA, two new dimensions ($e_0$, $e_\infty$) are defined, representing a point at the origin and a point at infinity, respectively, in addition to the three-dimensional Euclidean space ($e_1$, $e_2$, $e_3$) [29]. In this space, geometric entities (points, lines, circles, planes, and spheres) and calculations involving them (distances and intersections) can be represented with simple algebraic equations.
Also, the geometric product between two vectors $a$ and $b$ is defined as the combination of the inner product and the outer product:

$$ab = a \cdot b + a \wedge b. \qquad (1)$$
The inner product is used to calculate distances between elements, and the outer product generates a bivector, which is an element occupying the space spanned by both vectors. The outer product is also used to find the intersection between two elements. The intersection $M$ of two geometric objects $A$ and $B$ represented in CGA is given by $M^* = A^* \wedge B^*$, or $M = A^* \cdot B$. The element $A^*$ is the dual of $A$ and is expressed as

$$A^* = A\, I_c^{-1}, \qquad (2)$$

where $I_c^{-1} = e_0 e_3 e_2 e_1 e_\infty$, which allows for a change in representation of the same element. Standard and dual representations of commonly used geometrical objects in CGA are shown in Table I. There, $x$ and $n$ are points represented as a linear combination of the 3D base vectors:

$$x = x_1 e_1 + x_2 e_2 + x_3 e_3. \qquad (3)$$
As shown in Table I, there are two possible representations of the same element. A circle can be represented as the space spanned by three points, as well as the intersection of two spheres. Likewise, a line can be expressed as the intersection of two planes, as well as the space spanned by two points extended to infinity.
Making use of the previous equations and relationships, a CGA model to solve the inverse kinematics of a manipulator robot was obtained following the method proposed in [32]. The modeled robot was the Dynamixel AX-18A Smart robotic arm, a five degrees-of-freedom (DOF) manipulator. Figure 1 shows the modeled robot as well as its joints and links. The DOF of this robot correspond to its shoulder rotation, elbow flexion-extension, wrist flexion-extension, wrist rotation, and hand open-close function [33]. The inverse kinematics solution was obtained for joints $J_0$, $J_2$, and $J_3$. Given the particularities of the manipulation tasks, joints $J_4$ and $J_5$ were not considered, for simplicity.
2.2 Our CGA model
Next, we describe the CGA model that we implemented specifically for our system.
2.2.1 Fixed joints and planes

The origin of the CGA model was located at joint $J_0$, at the center of the rotational base of the robot; therefore $J_0 = e_0$. Joint $J_1$ is also a fixed joint with constant position, found directly above joint $J_0$; its position was defined as $x_1 = [0, 0, 0.036]$. Now, let us consider the desired final effector position as a point in space $x_e$. Then, a vertical plane $\pi_e$ representing the direction of the final effector is described as

$$\pi_e = e_0 \wedge e_3 \wedge x_e \wedge e_\infty, \qquad (4)$$

where $e_0$ represents the origin in the robot frame and $e_3$ the Euclidean $z$ axis. As the position of the final effector is used to define $\pi_e$, the direction of the plane changes consistently with $x_e$. A plane $\pi_b$, representing the rotational base of the robot, is defined as

$$\pi_b = e_0 \wedge e_1 \wedge e_2 \wedge e_\infty, \qquad (5)$$

where $e_1$ and $e_2$ represent the Euclidean $x$ and $y$ axes. Planes $\pi_e$ and $\pi_b$ are shown in Figure 2.
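For illustration, the following is a minimal sketch of (4) and (5) using the open-source Python package clifford and its predefined conformal model; the sample effector position, the variable names, and the normalization conventions are ours, not part of the original implementation.

```python
# Minimal sketch of Eqs. (4)-(5) with the `clifford` package's conformal model
# G(4,1): `up()` embeds a Euclidean point, and `eo`/`einf` are the null vectors
# representing the origin and infinity. The effector position is illustrative.
from clifford.g3c import e1, e2, e3, eo, einf, up

xe = up(0.10 * e1 + 0.05 * e2 + 0.20 * e3)   # conformal point of a sample effector position

pi_e = eo ^ e3 ^ xe ^ einf                   # vertical effector plane, Eq. (4)
pi_b = eo ^ e1 ^ e2 ^ einf                   # horizontal base plane, Eq. (5)
```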
2.2.2 Calculation of joint positions

In a kinematic-chain model of a robotic arm using CGA, the method implemented to find joint $J_n$ is based on the intersection of two spheres centered at joints $J_{n-1}$ and $J_{n+1}$, with radii equal to the lengths of the links connecting $J_{n-1}$ with $J_n$, and $J_n$ with $J_{n+1}$, respectively. The intersection of both spheres results in a circle, which is then intersected with the plane of the final effector to obtain a point pair representing two possible configurations for joint $J_n$; one point is then selected as $J_n$, depending on the desired configuration. The process requires the following elements.

A sphere centered at point $P$ and with radius $r$ is given by

$$s = P - \frac{1}{2} r^2 e_\infty; \qquad (6)$$
There are two methods for creating a circle: we can either intersect two spheres $s_j$ and $s_k$ by

$$c = (s_j \wedge s_k)^*, \qquad (7)$$

or intersect a plane $\pi$ and a sphere $s$ by

$$c = (\pi^* \wedge s)^*; \qquad (8)$$

The intersection of a circle $c$ and a plane $\pi$ to create a point pair $Pp$ is given by

$$Pp = (c^* \wedge \pi^*)^*; \qquad (9)$$

Finally, to obtain a point $P$ from $Pp$, we have

$$P = \frac{Pp \pm \sqrt{Pp^2}}{e_\infty \cdot Pp}. \qquad (10)$$
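A hedged sketch of these primitives, again with the clifford package, is given below; dual() plays the role of the * operator, the helper names are ours, and the package's pseudoscalar sign conventions may flip some signs relative to the equations above.

```python
# Sketch of the primitives in Eqs. (6)-(10).
import math
from clifford.g3c import einf, up

def sphere(center, r):
    """Eq. (6): s = P - (1/2) r^2 e_inf, with P = up(center)."""
    return up(center) - 0.5 * r**2 * einf

def circle_from_spheres(sj, sk):
    """Eq. (7): c = (s_j ^ s_k)*."""
    return (sj ^ sk).dual()

def circle_from_plane_sphere(pi, s):
    """Eq. (8): c = (pi* ^ s)*."""
    return (pi.dual() ^ s).dual()

def point_pair(c, pi):
    """Eq. (9): Pp = (c* ^ pi*)*."""
    return (c.dual() ^ pi.dual()).dual()

def extract_point(Pp, sign=+1):
    """Eq. (10): one of the two points encoded in the point pair."""
    beta = math.sqrt(abs((Pp * Pp).value[0]))    # scalar part of Pp^2
    return (Pp + sign * beta) * (einf | Pp).inv()
```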
Based on the previous expressions, in order to find the position of joint $J_2$ in our modeled robot, two spheres must be constructed, centered at $J_1$ and $J_3$. However, the position of joint $J_3$ is as yet unknown in our model. A similar situation occurs if the desired position is, instead, that of joint $J_3$: in that case, $x_e$ is known but $J_2$ is not. Given this situation, another approach was implemented in order to find joint $J_2$.
2.2.3 Position of joint $J_2$

Using (6), sphere $s_1$ was centered at $x_1$ with radius equal to the length of link $L_2$. Hence, in order to find joint $J_2$, another sphere $s_h$ must be intersected with $s_1$. In order to construct $s_h$, its center must be defined. This is achieved by first creating an auxiliary sphere $s_0$, centered at the origin, with radius $L_a$ equal to the horizontal component of the distance from $J_0$ to $J_2$. This is valid because the distance from $J_0$ to $J_2$ is constant for any position of the final effector $x_e$.

Then, using (8), $s_0$ is intersected with plane $\pi_e$ to obtain circle $c_0$. Next, using (9), $c_0$ is intersected with plane $\pi_b$ to produce point pair $Pp_0$, from which one point is selected as $x_h$ using (10). The procedure to find point $x_h$, which corresponds to the center of the desired sphere to be intersected with $s_1$, is shown in Figure 3.
Using (6), sphere $s_h$ is centered at $x_h$ with radius $L_b$ equal to the vertical component of the distance from $J_0$ to $J_2$. Then, the intersection of spheres $s_1$ and $s_h$ is given by (7), which results in circle $c_2$. Using (9), the intersection of $c_2$ with plane $\pi_e$ renders point pair $Pp_2$. Finally, the position of $J_2$ is obtained from $Pp_2$ using (10). The whole procedure to obtain the position of joint $J_2$ is represented in Figure 4.
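The chain just described can be sketched by composing the helpers above; the link lengths and the effector target below are illustrative placeholders, not the robot's actual dimensions.

```python
# Sketch of the J2 construction (Sections 2.2.1-2.2.3) using the helpers above.
from clifford.g3c import e1, e2, e3, eo, einf, up

La, Lb, L2 = 0.05, 0.15, 0.16            # hypothetical arm geometry (meters)
x1 = 0.036 * e3                          # fixed joint J1
xe = 0.10 * e1 + 0.05 * e2 + 0.20 * e3   # sample effector target

pi_e = eo ^ e3 ^ up(xe) ^ einf           # effector plane, Eq. (4)
pi_b = eo ^ e1 ^ e2 ^ einf               # base plane, Eq. (5)

c0 = circle_from_plane_sphere(pi_e, sphere(0 * e1, La))   # auxiliary circle, Eq. (8)
xh = extract_point(point_pair(c0, pi_b))                  # center of s_h, Eqs. (9)-(10)

s1 = sphere(x1, L2)                      # sphere at J1 with radius L2, Eq. (6)
sh = xh - 0.5 * Lb**2 * einf             # sphere at xh (assumed normalized), radius Lb
Pp2 = point_pair(circle_from_spheres(s1, sh), pi_e)       # Eqs. (7) and (9)
J2 = extract_point(Pp2)                  # chosen configuration for joint J2, Eq. (10)
```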
2.2.4 Position of joint $J_3$

The procedure to find the position of joint $J_3$ is straightforward once the position of joint $J_2$ has been calculated. For that, two spheres $s_2$ and $s_e$ are defined using (6), centered at $x_2$ and $x_e$, with radii equal to the lengths of links $L_3$ and $L_4$, respectively. Both spheres are intersected using (7) to obtain circle $c_3$. With (9), $c_3$ is then intersected with plane $\pi_e$ to obtain point pair $Pp_3$. From $Pp_3$, $J_3$ is easily obtained using (10). A representation of the procedure to find joint $J_3$ is shown in Figure 5.
2.2.5 Angle calculation

In order to calculate the angle formed by two vectors $\alpha$ and $\beta$, their corresponding unit vectors are defined as $\hat{\alpha} = \alpha/\|\alpha\|$ and $\hat{\beta} = \beta/\|\beta\|$. The normalized bivector spanning the space formed by those vectors is expressed as

$$\hat{N} = \pm\frac{\hat{\alpha} \wedge \hat{\beta}}{\|\hat{\alpha} \wedge \hat{\beta}\|}. \qquad (11)$$

As explained in [32], the angle $\theta$ between $\alpha$ and $\beta$ is given by

$$\theta = \mathrm{Atan2}\left[(\alpha \wedge \beta)/\hat{N},\ \alpha \cdot \beta\right], \qquad (12)$$
where Atan2 corresponds to the four-quadrant inverse tangent. This operator uses the signs of its two arguments to return the appropriate quadrant of the computed angle [34], a result that cannot be obtained from the conventional single-argument arctan function. Also note that the plus sign in (11) applies if the rotation from $\alpha$ to $\beta$ is counter-clockwise, while the minus sign applies for the opposite rotation.
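As a side note, for ordinary 3D direction vectors the bivector ratio in (12) reduces to the signed magnitude of the cross product, so the computation can be sketched with numpy as follows (the function name and the ccw flag, which mirrors the sign choice in (11), are ours):

```python
# Minimal numpy sketch of Eq. (12) for plain 3D direction vectors.
import numpy as np

def joint_angle(alpha, beta, ccw=True):
    sine = np.linalg.norm(np.cross(alpha, beta)) * (1.0 if ccw else -1.0)
    return np.arctan2(sine, np.dot(alpha, beta))   # four-quadrant inverse tangent
```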
In order to find the joint angles using (12), the vectors formed by the links of the robot need to be calculated. First, lines representing each link are defined:

$$l_{01} = e_0 \wedge J_1 \wedge e_\infty, \qquad (13)$$

$$l_{12} = J_1 \wedge J_2 \wedge e_\infty, \qquad (14)$$

$$l_{23} = J_2 \wedge J_3 \wedge e_\infty, \qquad (15)$$

and

$$l_{3e} = J_3 \wedge J_e \wedge e_\infty. \qquad (16)$$

The previous expressions define lines passing through links $L_1$, $L_2$, $L_3$, and $L_4$, respectively (see Figure 1). $L_4$ was considered a straight line from joint $J_3$ to the final effector $x_e$, i.e., we ignored the wrist-rotation and hand open-close joints.

In (12), the parameters $\alpha$ and $\beta$ need to be directional vectors for the purpose of computing our joint angles. Therefore, the directional vectors of plane $\pi_e$ and of lines $l_{23}$ and $l_{3e}$ were calculated, which represent the base and the links of the robot, respectively. For a given line $l$, its directional vector can be obtained as

$$(l \cdot e_0) \cdot e_\infty, \qquad (17)$$

and the directional vector normal to a plane $\pi$ is given by

$$(\pi \wedge e_\infty) \cdot e_0. \qquad (18)$$
Based on all the previously defined elements, the vectors involved in the calculation of the joint angles $\theta_k$, for $k = 0, 2, 3$, are summarized in Table II. There, $\alpha$ and $\beta$ in (11) and (12) are replaced by $\alpha_k$ and $\beta_k$, respectively, to calculate $\theta_k$. Please note that, as joint $J_1$ is fixed, $\theta_1$ does not need to be calculated.
2.3 Artificial Vision Algorithm
An AV algorithm was implemented to calculate the positions of items on a table, so that the robotic arm could perform the desired manipulation tasks. An ATW-1200 Acteck web camera was used to record images at 30 fps with a resolution of 640 × 480 pixels. The acquired images were processed and analyzed in real time using the OpenCV library (www.opencv.org) from Python.

The robotic arm was fixed on a white table, centered at one end of it. A plane was delimited on the table, defined as a 400 × 400 mm square. Four 30 × 30 mm square markers of different colors (cyan, orange, magenta, and yellow) were placed inside the delimited square, one at each corner. A blue disk with a height of 6 mm and a radius of 13 mm was used as the item to be picked, while two stickers with a radius of 42 mm (green and red) were used to indicate the target areas. The camera was fixed at a high angle, so that all markers and items were inside its field of view. The setup of the robotic arm and the items on the table is shown in Figure 6.
In order to perform object manipulation tasks, the real-world coordinates of the plane (referenced to the robot frame) had to be obtained from the image coordinates provided by the camera. To achieve this, a homography transformation was applied to the acquired images. In general, a two-dimensional point (u, v) in an image can be represented as a three-dimensional vector (x, y, z) by letting u = x/z and v = y/z. This is called the homogeneous representation of a point, and it lies on the projective plane P² [35]. A homography is an invertible mapping of points and lines on the projective plane P², thus making it possible to obtain the real-world coordinates of features in an image from their image coordinates.

In our case, the desired transformation is such that the image obtained from the camera is turned into a two-dimensional view of the same setup. In this transformation, the image shows a planar representation of the original view, as if the camera were placed directly above the delimited square.
In order to obtain this representation, the following homography transformation was applied [35]:

$$\begin{bmatrix} u \\ v \end{bmatrix} = H \begin{bmatrix} x \\ y \end{bmatrix}, \qquad (19)$$

where the vectors $[u\ v]^T$ and $[x\ y]^T$ represent the positions of selected points in the image and their corresponding positions in real-world coordinates, respectively; $H = K[R\,|\,t]$ is the homography matrix that defines the desired perspective change to be performed on the image; $K$ is the calibration matrix, which contains the intrinsic parameters of the camera; and $R$ and $t$ are, respectively, the rotation matrix and the translation vector applied to the camera in order to perform this change of view. In (19), $z$ is ignored, as all items are considered to be at $z = 0$.
In order to compute the matrix $H$, both the real-world and the image coordinates of the centroids of the square markers were obtained. First, the markers were detected through color segmentation and binarization, as shown in Figure 7(a). This process was performed separately on each marker, and their contours were detected. After that, the centroids of the markers in the image were calculated. The contours and centroids of each marker are shown in Figure 7(b).

Since the markers have known dimensions (30 × 30 mm), the positions of their centroids in real-world coordinates relative to the plane are known as well. These positions were defined as: cyan at [15, 15] mm, orange at [385, 15] mm, magenta at [15, 385] mm, and yellow at [385, 385] mm, all inside the 400 × 400 mm area available on the table. Then, both sets of coordinates are used to obtain $H$ with OpenCV's findHomography function, and the resulting matrix is applied to transform the image, as shown in Figure 7(c).
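A minimal sketch of this step with OpenCV is shown below; the pixel coordinates of the marker centroids and the file name are illustrative, while the real-world corner positions follow the values given above.

```python
# Homography from the four marker centroids, then a top-down view of the table.
import cv2
import numpy as np

img_pts = np.array([[102, 388], [531, 395], [143, 94], [489, 90]], dtype=np.float32)   # cyan, orange, magenta, yellow (pixels)
world_pts = np.array([[15, 15], [385, 15], [15, 385], [385, 385]], dtype=np.float32)   # mm, plane frame

H, _ = cv2.findHomography(img_pts, world_pts)          # image -> plane mapping
frame = cv2.imread("setup.png")                        # hypothetical captured frame
top_view = cv2.warpPerspective(frame, H, (400, 400))   # planar view, 1 px per mm

# A detected centroid (u, v) in pixels can also be mapped directly:
centroid = cv2.perspectiveTransform(np.array([[[320.0, 240.0]]], dtype=np.float32), H)
```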
Then, using the same procedure as with the markers, the centroids of the disk and the targets in the new image were calculated. However, the reference frame of the image is different from the reference frame of the robot. Therefore, the former was transformed by applying the following rotation matrix:

$$R = \begin{bmatrix} \cos\pi & -\sin\pi \\ \sin\pi & \cos\pi \end{bmatrix}. \qquad (20)$$

Furthermore, a translation vector $[200\ 400]^T$ was applied, as well as a sign switch of the $x$ axis, to obtain the desired positions. In the robot frame, the $x$ axis of the delimited square goes from −20 to 20, while the $y$ axis goes from 0 to 40, and the robot is located at the origin. After applying all of these transformations, the centroids of all items are finally expressed in the robot frame, and they can be detected by the AV system together with the contours of all items. This is shown in Figure 8.
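The frame change can be sketched as follows; the centroid value is a hypothetical output of the AV step, and the units are those of the transformed image.

```python
# Image frame -> robot frame: Eq. (20), then translation and x-axis sign switch.
import numpy as np

R = np.array([[np.cos(np.pi), -np.sin(np.pi)],
              [np.sin(np.pi),  np.cos(np.pi)]])   # 180-degree rotation, Eq. (20)
t = np.array([200.0, 400.0])                      # translation vector

c_img = np.array([120.0, 310.0])                  # sample centroid in image frame
c_rot = R @ c_img + t                             # rotate, then translate
c_robot = np.array([-c_rot[0], c_rot[1]])         # sign switch on the x axis
```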
3 Implementation of BCI systems

As a proof of concept, four participants volunteered for this study (two females and two males, with an average age of 22.25 years, SD = 0.95). The experimental protocol was divided into three stages for both the process-control and the goal-selection BCIs: (i) training, (ii) cued manipulation, and (iii) uncued manipulation. Both BCIs were MI-based; therefore, users were trained to control the corresponding µ-band desynchronization at will. In all trials, volunteers sat in front of a computer screen, which first showed a black screen (baseline) during which the user was meant to be in a resting state. Then, different types of stimuli were presented to the user, each representing a different command. The duration of the baseline (15 seconds) and of the stimulus presentation (4 seconds) was the same for all trials and stages. During stimulus presentation, users were expected to react accordingly, either by imagining the movement of the left or right hand, or by remaining in a resting state. In training trials, EEG signals were acquired and analyzed off-line to build and evaluate the performance of classifiers, which were then used on-line during the manipulation trials. In cued manipulation trials, the user was expected to manipulate the device as indicated by the stimuli. In contrast, the user was encouraged to manipulate the device at will during uncued manipulation trials.
3.1 Training trials

The training protocol was identical for the process-control and goal-selection BCIs. Three types of stimuli were presented to the user: right-hand imaginary movement (RHIM), left-hand imaginary movement (LHIM), and rest. A total of 30 stimuli (10 for each command) were randomly presented to the user. Stimuli were represented on the computer screen by a red arrow pointing to the right (for RHIM), a red arrow pointing to the left (for LHIM), and a black screen (for rest). A green cross appeared for 2 seconds before each stimulus as a pre-stimulus cue, and there was a variable inter-stimulus resting period of 2-4 seconds between stimuli. Users underwent three training sessions on different days, each comprising five repetitions of the described experimental protocol, while EEG recordings were obtained.
3.2 Signal acquisition

EEG signals were recorded with the Mobita equipment from TMSi Systems, using a measuring cap with 19 channels: FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Cz, Fz, and Pz. The impedance of all electrodes was kept below 5 kΩ for all experiments. Signals were acquired with a sampling frequency of 1000 Hz. Recordings were band-pass filtered with a fourth-order 1-100 Hz Butterworth filter, and a 60 Hz notch filter was used to eliminate power-line interference. The OpenViBE software was used for the BCI design and implementation; more information about this software can be found in [36].
3.3 Classification algorithm

Feature extraction was performed using the BCI2000 offline analysis tool (https://www.bci2000.org/mediawiki/index.php/User_Reference:BCI2000_Offline_Analysis), with which the r² value was calculated. A higher r² value indicates a higher discriminability of a signal between two stimulus conditions. More details about the statistical meaning of r² can be found at https://www.bci2000.org/mediawiki/index.php/Glossary. After each training session, signals from the five training trials were used to calculate r². Three r² maps (one per stimulus combination) were obtained per training session, showing the r² values for the 19 available channels and for frequencies ranging from 1 to 70 Hz. Each map, like the one shown in Figure 9, indicates the channels and frequencies that, for a specific combination of conditions, showed the highest discrimination. Through this procedure, the selected channels and frequencies were used as features for the classification algorithm.

Signals were spatially filtered using a Laplacian filter on the selected channels, as well as temporally filtered with a fourth-order Butterworth band-pass filter tuned to the selected frequencies. Power values were then obtained from the filtered signals to build the feature vectors, which then became the input to a linear discriminant analysis (LDA) classifier. LDA separates data representing different classes by finding a hyperplane that maximizes the distance between the class means while minimizing the variance within classes [37].
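As an illustration, the feature/classifier stage can be sketched as follows; the channel count, the frequency band, and the randomly generated epochs are placeholders for the subject-specific selections and recordings.

```python
# Log band-power features from the selected channels feeding an LDA model.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power(epochs, fs=1000.0, band=(10.0, 13.0)):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

X = band_power(np.random.randn(60, 2, 4000))   # stand-in for recorded epochs
y = np.repeat([0, 1], 30)                      # e.g., LHIM (0) versus RHIM (1)
clf = LinearDiscriminantAnalysis().fit(X, y)
```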
In our case, three pair-wise classifiers per training session were obtained using this procedure: LHIM versus RHIM, LHIM versus rest, and RHIM versus rest. The three classifiers were then tested on the recorded signals to evaluate their performance as the percentage of correctly classified stimuli. The classification was performed on each four-second stimulus epoch, divided into overlapping sub-epochs using a window function. Each four-second epoch was formed by 64 sub-epochs of two seconds, separated by 0.0625 seconds. Each pair-wise classifier labeled each sub-epoch as one of two possible classes, and the four-second epoch was classified as the mode of the classification results over all of its sub-epochs. Then, one general classifier was built, based on the results of the three pair-wise classifiers. Here, the four-second epoch of each stimulus was labeled as class I = 1, 2, or 3 (LHIM, rest, or RHIM, respectively) if two out of the three pair-wise classifiers labeled the same epoch identically. The mean performance of the general classifiers across trials is shown in Table III for all subjects and training sessions, together with their selected features. After the training sessions, each user proceeded to perform the subsequent trials using the classifier with the highest performance obtained in the last training session.
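The two-level decision rule can be sketched as follows; the classifier objects are assumed to be fitted binary LDA models as above, and the class numbering follows the I = 1, 2, 3 convention of the text.

```python
# Mode over sub-epochs per pair-wise classifier, then 2-of-3 agreement.
import numpy as np
from scipy import stats

def epoch_label(sub_epochs, clf_lr, clf_lrest, clf_rrest):
    """sub_epochs: (n_sub, n_features). Returns 1 (LHIM), 2 (rest), 3 (RHIM), or None."""
    votes = []
    for clf, classes in [(clf_lr, (1, 3)), (clf_lrest, (1, 2)), (clf_rrest, (3, 2))]:
        sub_labels = [classes[int(p)] for p in clf.predict(sub_epochs)]
        votes.append(stats.mode(sub_labels, keepdims=False).mode)   # mode over sub-epochs
    counts = np.bincount(votes, minlength=4)
    return int(np.argmax(counts)) if counts.max() >= 2 else None    # 2-of-3 agreement
```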
3.4 Process-control BCI

The process-control BCI was designed so that users were able to perform three-dimensional movements to complete reaching tasks. In this system, the position of the final effector, as well as the axis along which the effector moves, can be controlled through low-level MI-based commands. To achieve this, the user has two choices: moving along the selected axis (the y axis at the initial step) or changing between axes. In this BCI, the classification of an LHIM results in a −10 mm displacement, while the classification of an RHIM results in a +10 mm displacement along the selected axis. The classification of a rest event holds the position of the final effector with no displacement, while the classification of two consecutive rest events allows the user to change the axis. This change of axis takes place in the following sequence: y→z, z→x, and x→y.
3.4.1 Cued manipulation

In these trials, users sat in front of a computer showing three windows on the screen. The first window was used for stimulus presentation, the second was used to display the axis along which the movement of the robot took place, and the third was used to visualize the robot and its movements. The setup for these experiments is shown in Figure 10. After the baseline period, 15 random stimuli (5 of each type) were presented to the user. The pre-stimulus, stimulus, and inter-stimulus durations were the same as in the training trials (see Section 3.1). After a stimulus was presented, the user was expected to emit the instructed command through the BCI. Then, the robot performed a specific movement based on the classification result. In these trials, performance was evaluated as the percentage of correctly classified stimuli. The intention of these trials was to get the users acquainted with the BCI, and they were performed immediately before the uncued manipulation trials. Users performed three sessions on different days, each consisting of three repetitions of this protocol.
3.4.2 Uncued manipulation

The same screen display was used as in the cued trials, but here subjects were asked to complete reaching tasks on their own. At the start of each trial, the final effector was fixed at the home position [0, 155.5, 284.3] and a target was placed at [0, 300, −49]. At this initial step, the distance from the final effector to the target was 360 mm. Note that the target is placed at z = −49, as the robot base is 49 mm above the table. A baseline period was followed by the presentation of 20 stimuli showing the word "Imagine", during which the user was expected to emit MI commands through the BCI. The durations of the pre-stimulus, stimulus, and inter-stimulus periods were the same as in the training trials (see Section 3.1). The user was instructed to move the final effector as close as possible to the target within the 20 stimuli, using the protocol described in Section 3.4. Performance was evaluated as the percentage of stimuli in which the user moved the final effector closer to the target or successfully changed to the y axis. Users performed three sessions on different days, each consisting of five repetitions of the described protocol.
3.5 Goal-selection BCI

The goal-selection BCI was designed to perform pick-and-place tasks in a semi-autonomous way, with the disk and two possible targets. Users were able to perform these tasks for any position of the items (randomly chosen before each trial) inside the robot workspace. The centroids $C = [C_x, C_y]$ of the two target stickers were calculated in these trials by the AV algorithm. In this case, the classification of the three types of events resulted in different manipulation tasks:

- if an event was classified as RHIM, the robot reached for the disk, placed it on the target located to the right (greater $C_x$ component), and returned to the home position;
- if an event was classified as LHIM, the robot reached for the disk, placed it on the target located to the left (smaller $C_x$ component), and returned to the home position;
- if an event was classified as rest, the robot remained at the home position.

After the robot performed a manipulation task, all the items on the table were manually moved to random positions, in preparation for the next trial.
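This mapping can be sketched as follows; the function is a simplified stand-in for the actual control loop, which hands the chosen centroid to the CGA solver for the pick-and-place motion.

```python
# Target selection from the classified event and the AV centroids.
def goal_selection_step(label, target_cs):
    """label: 'LHIM', 'RHIM', or 'rest'; target_cs: list of [Cx, Cy] centroids.
    Returns the selected target centroid, or None to stay at the home position."""
    if label == "rest":
        return None
    right = max(target_cs, key=lambda c: c[0])   # greater Cx component
    left = min(target_cs, key=lambda c: c[0])    # smaller Cx component
    return right if label == "RHIM" else left
```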
3.5.1 Cued manipulation trials

In these trials, the subject sat in front of a computer screen showing two windows. The first one was used for stimulus presentation, while the second was used to present the transformed image, as shown in Figure 11. After the baseline period, a stimulus (RHIM, LHIM, or rest) was randomly presented. A total of 15 stimuli (5 of each type) were presented in each trial. A one-second beep followed a two-second green cross as the pre-stimulus cue, with a 27-29 second inter-stimulus period. Manipulation tasks were performed according to the result of the classification, and performance was evaluated as the percentage of correctly classified stimuli. The total duration of these trials was considerably longer than in the low-level BCI, mainly because of the longer inter-stimulus period, during which the manipulation tasks took place. Users underwent three sessions on different days, performing five trials in each session.
3.5.2 Uncued manipulation trials

For uncued manipulation trials, all stimuli were replaced with the word "Imagine", and the user freely decided which task to perform, as explained in Section 3.5. A total of 15 stimuli were presented in each trial. The stimulus, pre-stimulus, and inter-stimulus durations were the same as in the goal-selection cued manipulation trials (see Section 3.5.1). Immediately after the classification was performed, and before the robot executed the task, the user was asked which type of stimulus he or she had intended to emit. In these trials, performance was evaluated as the percentage of coincidences between the intended and the classified stimulus types.
3.6 Analysis of data through P300 estimation

Assessments of mental fatigue through P300 amplitude and latency have been reported in [38] and [19]. In [19], mental fatigue was evaluated through EEG measurements: participants' P300 waveforms were measured during a modified Eriksen flanker task (replacing word stimuli with arrows) before and after performing mental arithmetic tasks. A decreased P300 amplitude and an increased latency were observed after performing the arithmetic tasks, when users were mentally fatigued. Statistical analysis revealed the most significant changes in amplitude and latency at channels O1, O2, and Pz, probably as a reflection of visual processing during the presentation of the arrow stimuli. Similarly to the protocol used in [19] to assess mental fatigue, signals were segmented into 1-s stimulus-locked EEG epochs, from 200 ms before to 800 ms after stimulus presentation. These epochs were obtained for the presentation of the word "Imagine" during uncued manipulation trials of both the process-control and the goal-selection BCI. For each trial, a representative waveform was obtained by averaging the epochs from all stimuli. Then, the averaged waveforms were band-pass filtered at 1-10 Hz and used to calculate the P300 amplitude and latency. The amplitude was taken as the most positive peak within a 200-500 ms window after stimulus presentation, and the latency as the time at which this peak appeared. Amplitude and latency values were obtained through this procedure for all trials, sessions, and subjects, at channels O1, O2, and Pz. A representative P300 waveform is shown in Figure 12 for these three channels.
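For reference, the amplitude/latency extraction can be sketched as follows; the epoch array is a stand-in for the stimulus-locked recordings, and the windowing follows the values in the text.

```python
# P300 amplitude and latency from averaged, 1-10 Hz band-passed epochs.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0
epochs = np.random.randn(20, 1000)        # (n_stimuli, samples): -200 to 800 ms

erp = epochs.mean(axis=0)                 # representative waveform per trial
sos = butter(4, (1.0, 10.0), btype="bandpass", fs=fs, output="sos")
erp = sosfiltfilt(sos, erp)

t = np.arange(-200, 800)                  # ms relative to stimulus onset
win = (t >= 200) & (t <= 500)             # P300 search window
peak = np.argmax(erp[win])
amplitude = erp[win][peak]                # most positive peak in the window
latency = t[win][peak]                    # time of that peak, in ms
```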
In order to examine the differences in mental fatigue within and between users in relation to the use of our two BCI schemes, two-way ANOVA tests were performed for all users: one for amplitude and one for latency. In these tests, the influence of trial repetition (1-5), channel location (O1, O2, and Pz), and their interaction on both P300 features was analyzed. The number of replications was taken as three, representing the three uncued manipulation sessions performed by the users. To further analyze mental fatigue related to continuous BCI manipulation, one-way ANOVA tests (p < 0.05) were performed for each subject. Six one-way ANOVA tests were performed per subject: three channels (O1, O2, and Pz) × two P300 features (amplitude and latency). These tests were performed in order to find which channel showed a significant relationship to the trial-repetition factor. Then, the amplitude and latency values of all users were compared using the most significant channel from this analysis.
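A minimal sketch of the two-way test with statsmodels is shown below; the long-format table and its values are placeholders for the measured P300 features, and the mapping to the paper's exact replication design is our assumption.

```python
# Two-way ANOVA (trial repetition x channel) on a P300 feature.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({                       # stand-in data: 90 rows
    "amplitude": [2.1, 1.8, 1.5, 2.0, 1.7, 1.6] * 15,
    "trial": ([1, 2, 3, 4, 5] * 18)[:90],
    "channel": (["O1", "O2", "Pz"] * 30)[:90],
})

model = ols("amplitude ~ C(trial) * C(channel)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))    # main effects and interaction
```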
4 Results

A preliminary validation of our CGA model and AV algorithm can be found in [26] and [27], respectively; hence, we omit those details here. This section therefore presents the results of evaluating the whole system in the context of our BCI implementations for four subjects (two on each BCI type). Performance values were obtained for all subjects in the training, cued, and uncued trials, according to the particularities of each experimental protocol. For the training trials, performance values correspond to the classifier accuracies shown in Table III. Performance for the cued and uncued manipulation trials was computed as explained in Sections 3.4 and 3.5. The performance values included in these results represent the average across trials for each session.
4.1 Performance of process-control BCI

Subject S1 reached an accuracy level of 65% in the first training session, 64% in the second, and 63% in the third. During cued manipulation trials, performance started at 18%, then increased to 25% and 29% by the second and third sessions, respectively. For uncued manipulation trials, this user only moved away from the target in the first session (0%); in the second and third sessions, S1 obtained performances of 14% and 17%. Subject S2 showed a behavior similar to S1 during training trials, starting at 65% and decreasing to 62% and 60% in the second and third sessions. In cued manipulation trials, performance started at 33%, then increased to 37% by the second session and 45% by the third. For uncued manipulation trials, performance started at 28% in the first session, decreased to 17% in the second, and increased to 37% by the third. The process-control BCI performance results for Subjects S1 and S2 are shown in Figure 13.
4.2 Performance of goal-selection BCI

Subject S3 started the training sessions at 56% accuracy, obtained 58% in the second session, and reached 78% in the third. Performance in the cued manipulation trials started at 40%, increased to 56% in the second session, and decreased to 49% by the third. During uncued manipulation trials, performance started at 60% in the first session, was 53% in the second, and reached 67% by the third. Subject S4 obtained training performance values of 73% in the first session and 72% in the second, decreasing to 60% in the third. During cued manipulation trials, this subject obtained performance values of 45% in the first session, 41% in the second, and 46% in the third. For uncued manipulation trials, performance started at 30% and increased to 38% and 48% by the second and third sessions, respectively. The goal-selection BCI performance results for Subjects S3 and S4 are shown in Figure 14.
4.3 P300 analysis

The results of the two-way ANOVA tests are presented in Table IV. The two-way ANOVA for P300 latency showed statistical significance for Subjects S1 (p = 0.0147) and S2 (p = 0.0001) in the trial factor, but no significance was observed for the channel and interaction factors. Subjects S3 and S4 showed no statistical significance for any of the analyzed factors. In the two-way ANOVA for P300 amplitude, users showed smaller p-values for trial than for channel and interaction; however, the tests did not show statistical significance for any factor or interaction.

The results of the one-way ANOVA tests are shown in Table V. The one-way ANOVA for P300 latency showed statistical significance for Subject S1 at channel O1 (p = 0.0476), and for Subject S4 at channel O2 (p = 0.0242). For Subjects S2 and S3, p-values were not significant at any channel. In the one-way ANOVA for P300 amplitude, Subject S4 showed statistical significance at channel Pz (p = 0.0019). The tests for S1, S2, and S3 revealed no statistical significance at any of the three analyzed channels.
The results of the statistical tests revealed differences between the latency and amplitude analyses: across all tests, greater changes were found in latency than in amplitude. Based on these results, an evaluation and comparison of amplitude and latency values was performed. These values were taken from the channel with the lowest p-value in the one-way latency ANOVA results. The selected channels were O1 for S1, Pz for S2 and S3, and O2 for S4.
The amplitude values calculated for all uncued manipulation trials are shown in Figure 15 for each session and user. Subjects S1 and S4 showed a similar behavior: a decreasing P300 amplitude trend in all sessions; in their case, the amplitude observed in the first trial was higher than that of the last one. S2 showed a decreasing trend as well for the first and second sessions, yet the opposite was observed during the third session. S3 presented an increasing amplitude trend in all sessions; here, the amplitude obtained in the last trial was higher than that of the first trial.

Latency values are shown in Figure 16 for all users and sessions. Subjects S1 and S3 displayed an increasing P300 latency trend during the first and third sessions, and a decreasing trend during the second session. Subject S2 presented an increasing latency trend in all sessions. Subject S4 showed an increase in latency during the first and second sessions, and a decrease in the third.
5 Discussion

The implementation and integration of the CGA model and the AV algorithm allowed us to successfully design an MI-based semi-autonomous BCI for manipulation tasks. Compared with the low-level system, both BCIs were similar in terms of training protocol and control commands; however, the complexity of the executed tasks was different. The semi-autonomous goal-selection BCI was superior in task complexity to the process-control BCI, even though both systems used the same control commands as input. While the process-control BCI might be used to perform more general tasks, it demands a continuous state of awareness from the user. Its output consists of discrete low-level commands, which in the long run might lead the user to a mental fatigue condition. Although the semi-autonomous BCI is goal-specific, it requires the user's attention only during short time periods, making it theoretically less fatiguing. The semi-autonomous goal-selection BCI works, in essence, in a way that is more natural to the user than the process-control BCI. This is because, when performing reaching tasks, people think of the main goal and the cerebellum processes the information necessary to successfully achieve it, rather than executing several discrete low-level movements [39].
The features selected for the users' general classifiers were mainly frontal, central, and parietal electrodes in the µ (8-13 Hz) and β (13-30 Hz) brain rhythms, which are known to be physiologically involved in the imaginary movement process. The channels selected for the classifiers are consistent with reports of central activity as a reflection of contralateral motor-cortex desynchronization during imaginary movement [10], and of fronto-parietal activation related to the control of spatial attention and motor planning during reaching tasks [40, 41].
Even though all users underwent the same training protocol, differences among them were observed. Across training sessions, S1 and S2 maintained a relatively constant performance, while S3 showed a more noticeable improvement. S4 displayed a relatively high performance in the first and second sessions, but it decreased in the third. During cued manipulation trials, all users obtained low performance levels, and none of them showed a significant improvement across sessions. S1 obtained below-chance-level (33%) performance during all sessions. The performance of Subjects S2, S3, and S4 was in general above chance level, but always remained below 60%. During uncued manipulation trials, Subjects S1 and S2 presented the lowest performance values, close to and below chance level. This indicates that these users faced difficulty while controlling the process-control BCI. The performance of S3 and S4 during uncued manipulation trials was higher (around 40-60%) than that of S1 and S2. The mean performance across trials of Subjects S3 and S4 failed to reach the 70% considered to be the theoretical threshold for practical MI-BCI use [42]. However, their performance was evidently higher than that obtained by the users of the process-control BCI. This might suggest that the designed semi-autonomous goal-selection BCI was easier to operate than the process-control BCI. Future research will address classification optimization to increase system accuracy and ease of use.
As shown in Table III, the channels and frequencies selected for feature extraction changed across sessions for all users. This might suggest that the channel/frequency selection method used here is sensitive to intra- and inter-subject brain variability. After the training trials, a classifier with fixed parameters was selected per subject and used in all BCI trials. Yet, constant adaptation of the classifier parameters is required for optimal operation. Hence, an optimized feature selection algorithm should be implemented to address this issue and increase the efficiency of our proposed semi-autonomous BCI. Such optimization was beyond the scope of this work, but reports on how optimized correlation-based feature selection methods are used in MI-BCIs can be found in [43, 44].
Another efficient approach to feature selection is partial directed coherence (PDC) analysis, which could help to identify relevant channels and features. Recently, a PDC-based analysis was proposed in [45] to identify relevant features for MI tasks, and efficient classifiers were built based on this procedure. More recently, a review of EEG classification algorithms highlighted Riemannian geometry-based classifiers, as well as adaptive classification algorithms, as promising [46]. A simple implementation of an adaptive classifier for MI tasks was described in [47], which showed an encouraging increase in classification accuracy. Newer classifiers based on Riemannian geometry have shown good results in classifying MI-related brain activity [48].
Regarding our selection of the P300 component to evaluate mental fatigue: this component is not exclusively present during non-frequent stimuli; rather, its amplitude is enhanced by them, which is what makes it a suitable control command for BCIs. The P300 amplitude is larger during non-frequent stimuli, and it is typically used and analyzed on the basis of this property. However, it has been demonstrated that P300 responses can be observed for both frequent and non-frequent stimuli [49, 50]. In fact, under a reaction-time regime, the P300 is elicited by both predictable and unpredictable stimuli. Task demands increase in this scenario, as users must decide when to respond in a fast and correct manner, which leads to an enhancement of the P300, independently of stimulus predictability [49]. In our study, users were instructed to perform MI commands after the presentation of the word "Imagine", and P300 components were analyzed immediately after stimulus onset. Although the stimulus presentation during uncued manipulation trials could be considered predictable, the P300 analysis remains valid, as it was executed under a reaction-time regime.
Under those conditions, the results of the two-way and one-way ANOVA tests showed statistically significant changes in P300 latency for Subjects S1, S2, and S4. Except for S4, the tests revealed no statistical significance for P300 amplitude. When comparing the amplitude and latency values from Figures 15 and 16, a general trend was found among users: a decrease in amplitude and an increase in latency. These trends in the P300 features appeared with trial repetition, that is, after continuous manipulation of the BCI. These changes in amplitude and latency might be related to the generation of mental fatigue, as they appear after a continuous execution of manipulation tasks through the BCI. It has been shown that a decrease in P300 amplitude and an increase in latency reflect decreased cognitive processing and lower attention levels [19]. Similar results have been found for a P300-BCI evaluated under different levels of mental workload and fatigue [51]. When comparing subjects performing on the same BCI type, the user with the lowest performance exhibited lower amplitude and higher latency values than the user with the highest performance (although this was more evident for the amplitude values); this was observed when comparing both S1-S2 and S3-S4. Subject S3 displayed an interesting behavior: an increasing amplitude trend, while also being the only subject who did not show statistical significance in any P300 test. At the same time, this was the subject with the highest performance values in the uncued manipulation trials. A possible explanation for this particular case is that, after performing manipulation trials on the BCI, mental fatigue affected Subject S3 differently than the rest of the users. This difference in mental fatigue generation was reflected in non-significant changes in the P300 parameters during the tests, as well as in higher performance values.
6 Conclusions

Two BCI systems, a process-control one and a semi-autonomous goal-selection one, were implemented and compared in terms of performance and mental fatigue. The process-control BCI allowed users to perform three-dimensional movements with a robotic arm to reach for a target. The semi-autonomous BCI allowed the user to successfully execute manipulation tasks with the same robotic arm, including reaching, picking, and placing movements. The increase in task complexity represented by the semi-autonomous BCI was achieved without compromising the simplicity of the control procedure, as both BCIs were controlled through MI commands. Users of the semi-autonomous BCI obtained higher performance values than users of the low-level BCI. The difference in task complexity also translated into a difference in the mental fatigue experienced by the users of the different systems. A P300 amplitude decrease and a latency increase were found as users performed continuous BCI trials, which is consistent with reports of mental fatigue detection in EEG.

We also present strong evidence of the advantages of the semi-autonomous BCI in terms of performance and mental fatigue. It is also important to highlight the potential use of the P300 waveform as an indicator of mental fatigue during BCI testing, training, and evaluation. Techniques to further reduce mental fatigue while using BCI systems might increase the acceptance rate among BCI patients, as well as provide a possible path to tackle BCI illiteracy. It is of great importance that users find the system non-fatiguing and easy to use, in order to provide more comfortable and efficient assistance. This also helps the user in the process of learning how to control the BCI, and can be combined with different strategies to further personalize the system (see, e.g., previous work by our group on how to select the feedback modality that best enhances a volunteer's capacity to operate a BCI system [52]).
The development of more advanced semi-autonomous BCI systems that provide information about the environment during specific tasks will further enhance performance and usability. Semi-autonomous BCIs offer users the possibility of performing more complex tasks in a simple, less fatiguing way. In our system, the integration of the AV and CGA algorithms provided a real-time calculation of the robot's inverse model, offering the flexibility to implement more complex object manipulation tasks in a dynamic environment. The use of a robotic arm with more DOF, as well as the implementation of object recognition techniques, could increase the complexity of the manipulation tasks to be performed while using the same MI commands to control the BCI, thus preserving control simplicity for the users.
References
[1] L. Bi, X.-A. Fan, Y. Liu, EEG-based brain-controlled mobile robots: a survey, IEEE Transac-
tions on Human-Machine Systems 43 (2) (2013) 161–176.
25
[2] H. Cecotti, A self-paced and calibration-less SSVEP-based brain-computer interface speller,
IEEE Transactions on Neural Systems and Rehabilitation Engineering 18 (2) (2010) 127–133.
[3] C. Wang, B. Xia, J. Li, W. Yang, Dianyun, A. C. Velez, H. Yang, Motor imagery BCI-based
robot arm system, in: Proceedings of the 2011 Seventh International Conference on Natural
Computation, IEEE, Shanghai, China, 2011, pp. 181–184.
[4] D. P. Murphy, O. Bai, A. S. Gorgey, J. Fox, W. T. Lovegreen, B. W. Burkhardt, R. Atri,
J. S. Marquez, Q. Li, D.-Y. Fei, Electroencephalogram-based brain-computer interface and
lower-limb prosthesis control: A case study, Frontiers in Neurology 8 (2017) 696.
[5] C. J. Bell, P. Shenoy, R. Chalodhorn, R. P. N. Rao, Control of a humanoid robot by a
noninvasive brain-computer interface in humans, Journal of Neural Engineering 5 (2) (2008) 214–220.
[6] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, T. M. Vaughan, Brain-
computer interfaces for communication and control, Clinical Neurophysiology 113 (6) (2002)
767–791.
[7] A. S. Royer, M. L. Rose, B. He, Goal selection versus process control while learning to use a
brain-computer interface, Journal of Neural Engineering 8 (3) (2012) 1–20.
[8] Z. İşcan, V. V. Nikulin, Steady state visual evoked potential (SSVEP) based brain-computer
interface (BCI) performance under different perturbations, PLOS ONE 13 (1) (2018).
[9] R. Fazel-Rezai, B. Z. Allison, C. Guger, E. W. Sellers, S. C. Kleih, A. Kübler, P300 brain
computer interface: current challenges and emerging trends, Frontiers in Neuroengineering
5 (14) (2012) 1–14.
[10] D. J. McFarland, L. A. Miner, T. M. Vaughan, J. R. Wolpaw, Mu and beta rhythm topogra-
phies during motor imagery and actual movements, Brain Topography 12 (3) (2000) 177–186.
[11] R. Leeb, D. Friedman, G. R. Müller-Putz, R. Scherer, M. Slater, G. Pfurtscheller, Self-paced
(asynchronous) BCI control of a wheelchair in virtual environments: A case study with a
tetraplegic, Computational Intelligence and Neuroscience 2007 (2007).
[12] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler,
J. Perelmouter, E. Taub, H. Flor, A spelling device for the paralysed, Nature 398 (1999) 297–298.
[13] J. R. Wolpaw, D. J. McFarland, Control of a two-dimensional movement signal by a noninvasive
brain-computer interface in humans, Proceedings of the National Academy of Sciences 101 (51)
(2004) 17849–17854.
[14] G. Pfurtscheller, C. Neuper, G. R. Müller, B. Obermaier, G. Krausz, A. Schlögl, R. Scherer,
B. Graimann, C. Keinrath, D. Skliris, M. Wortz, G. Supp, C. Schrank, Graz-BCI: state of
the art and clinical applications, IEEE Transactions on Neural Systems and Rehabilitation
Engineering 11 (2) (2003) 1–4.
[15] D. J. McFarland, D. J. Krusienski, W. A. Sarnacki, J. R. Wolpaw, Emulation of computer
mouse control with a noninvasive brain-computer interface, Journal of Neural Engineering 5 (2)
(2008) 101–110.
[16] E. V. C. Friedrich, D. J. McFarland, C. Neuper, T. M. Vaughan, P. Brunner, J. R. Wolpaw,
A scanning protocol for a sensorimotor rhythm-based brain-computer interface, Biological
Psychology 80 (2) (2009) 169–175.
[17] D. Valbuena, M. Cyriacks, O. Friman, I. Volosyak, A. Gräser, Brain-computer interface for
high-level control of rehabilitation robotic systems, in: Proceedings of the 2007 IEEE 10th
International Conference on Rehabilitation Robotics, IEEE, Noordwijk, The Netherlands, 2007,
pp. 619–625.
[18] B. Graimann, B. Allison, C. Mandel, T. Lüth, D. Valbuena, A. Gräser, Non-invasive Brain-
Computer Interfaces for Semi-autonomous Assistive Devices, Springer London, London, 2008,
pp. 113–138.
[19] S.-Y. Cheng, H.-T. Hsu, Mental Fatigue Measurement Using EEG, Intech Open, 2011, pp.
203–228.
[20] J. B. Isreal, G. L. Chesney, C. D. Wickens, E. Donchin, P300 and tracking difficulty: evidence
for multiple resources in dual-task performance, Psychophysiology 17 (3) (1980) 259–273.
[21] A. Murata, A. Uetake, Y. Takasawa, Evaluation of mental fatigue using feature parameter
extracted from event-related potential, International Journal of Industrial Ergonomics 35 (8)
(2005) 761–770.
[22] J. N. Mak, D. J. McFarland, T. M. Vaughan, L. M. McCane, P. Z. Tsui, D. J. Zeitlin, E. W.
Sellers, J. R. Wolpaw, EEG correlates of P300-based brain-computer interface (BCI) performance
in people with amyotrophic lateral sclerosis, Journal of Neural Engineering 9 (2) (2012) 11.
[23] X. Perrin, R. Chavarriaga, F. Colas, R. Siegwart, J. del R. Millán, Brain-coupled interaction
for semi-autonomous navigation of an assistive robot, Robotics and Autonomous Systems
58 (12) (2010) 1246–1255.
[24] D. Göhring, D. Latotzky, M. Wang, R. Rojas, Semi-autonomous car control using brain
computer interfaces, in: S. Lee, H. Cho, K.-J. Yoon, J. Lee (Eds.), Intelligent Autonomous
Systems 12, Vol. 2 of Advances in Intelligent Systems and Computing, Springer, Berlin,
Jeju Island, Korea, 2013, pp. 393–408.
[25] D. Hildenbrand, D. Fontijne, Y. Wang, M. Alexa, L. Dorst, Competitive runtime performance
for inverse kinematics algorithms using conformal geometric algebra, EUROGRAPHICS.
[26] M. A. Ramírez-Moreno, D. Gutiérrez-Ruiz, Modeling a robotic arm with conformal geometric
algebra in a brain-computer interface, in: Proceedings of the 2018 International Conference
on Electronics, Communications and Computers (CONIELECOMP), IEEE, Cholula, Mexico,
2018, pp. 11–17.
[27] M. A. Ramírez-Moreno, S. M. Orozco-Soto, J. M. Ibarra-Zannatha, D. Gutiérrez-Ruiz, Artificial
vision algorithm for object manipulation with a robotic arm in a semi-autonomous brain-
computer interface, in: M. C. Carrozza, S. Micera, J. L. Pons (Eds.), Wearable Robotics:
Challenges and Trends, Vol. 22 of Biosystems & Biorobotics, Springer, Cham, Pisa, Italy, 2019,
pp. 187–191.
[28] M. W. Spong, S. Hutchinson, M. Vidyasagar, Robot modeling and control, 1st Edition, Wiley
select coursepack, John Wiley & Sons Inc, 2005, Ch. Forward and Inverse Kinematics, pp.
85–98.
[29] O. Carbajal-Espinosa, L. González-Jiménez, J. Oviedo-Barriga, B. Castillo-Toledo,
A. Loukianov, E. Bayro-Corrochano, Modeling and pose control of robotic manipulators and
legs using conformal geometric algebra, Computación y Sistemas 19 (3) (2015) 475–486.
[30] D. Hildenbrand, J. Zamora, E. Bayro-Corrochano, Inverse kinematics computation in com-
puter graphics and robotics using conformal geometric algebra, Advances in Applied Clifford
Algebras 18 (3-4) (2008) 699–713.
[31] C. Perwass, Geometric Algebra with Applications in Engineering, Geometry and Computing,
Springer-Verlag, 2009, Ch. Introduction, pp. 1–23.
[32] A. L. Kleppe, O. Egeland, Inverse kinematics for industrial robots using conformal geometric
algebra, Modeling, Identification and Control 37 (1) (2016) 63–75.
[33] L. Griggs, F. Fahimi, Introduction and testing of an alternative control approach for a robotic
prosthetic arm, The Open Biomedical Engineering Journal 8 (2014) 93–105.
[34] E. I. Organick, A FORTRAN IV Primer, 1st Edition, Addison-Wesley, 1966, p. 42. Some
processors also offer the library function called ATAN2, a function of two arguments (opposite
and adjacent).
[35] E. Dubrofsky, Homography estimation, Master’s thesis, The University of British Columbia
(2009).
[36] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, A. Lécuyer,
OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces
in real and virtual environments, Presence: Teleoperators and Virtual Environments 19 (1)
(2010) 35–53.
[37] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, B. Arnaldi, A review of classification
algorithms for EEG-based brain-computer interfaces, Journal of Neural Engineering 4 (2) (2007)
R1–R13.
[38] A. Uetake, A. Murata, Assessment of mental fatigue during VDT task using event-related
potential (P300), in: Proceedings - IEEE International Workshop on Robot and Human In-
teractive Communication, IEEE, Osaka, Japan, 2000, pp. 235–240.
[39] P. J. E. Attwell, S. F. Cooke, C. H. Yeo, Cerebellar function in consolidation of a motor memory,
Neuron 34 (6) (2002) 1011–1020.
[40] P. Praamstra, L. Boutsen, G. W. Humphreys, Frontoparietal control of spatial attention and
motor intention in human EEG, Journal of Neurophysiology 94 (1) (2005) 764–774.
[41] J. R. Naranjo, A. Brovelli, R. Longo, R. Budai, R. Kristeva, P. P. Battaglini, EEG dynamics of
the frontoparietal network during reaching preparation in humans, NeuroImage 34 (4) (2007)
1673–1682.
[42] C. Guger, G. Edlinger, W. Harkam, I. Niedermayer, G. Pfurtscheller, How many people are
able to operate an EEG-based brain-computer interface (BCI)?, IEEE Transactions on Neural
Systems and Rehabilitation Engineering 11 (2) (2003) 145–147.
[43] J. Jin, Y. Miao, I. Daly, C. Zuo, D. Hu, A. Cichocki, Correlation-based channel selection and
regularized feature optimization for MI-based BCI, Neural Networks 118 (2019) 262–270.
[44] J. K. Feng, J. Jin, I. Daly, J. Zhou, X. Wang, A. Cichocki, An optimized channel selection
method based on multifrequency CSP-rank for motor imagery-based BCI system, Computational
Intelligence and Neuroscience 2019 (2019) 10.
[45] J. A. Gaxiola-Tirado, R. Salazar-Varas, D. Gutiérrez, Using the partial directed coherence
to assess functional connectivity in electroencephalography data for brain-computer interfaces,
IEEE Transactions on Cognitive and Developmental Systems 10 (3) (2017) 776–783.
[46] F. Lotte, L. Bougrain, A. Cichocki, M. Clerc, M. Congedo, A. Rakotomamonjy, F. Yger, A
review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update,
Journal of Neural Engineering 15 (3) (2018) 28.
[47] P. Shenoy, M. Krauledat, B. Blankertz, R. P. Rao, K.-R. Müller, Towards adaptive classification
for BCI, Journal of Neural Engineering 3 (1) (2006) R13–R23.
[48] S. Guan, K. Zhao, S. Yang, Motor imagery EEG classification based on decision tree framework
and Riemannian geometry, Computational Intelligence and Neuroscience 2019 (2019) 13.
[49] E. Donchin, M. Kubovy, M. Kutas, R. Johnson, R. I. Herning, Graded changes in evoked
response (P300) amplitude as a function of cognitive activity, Perception & Psychophysics
14 (2) (1973) 319–324.
[50] R. Verleger, K. Śmigasiewicz, Do rare stimuli evoke large P3s by being unexpected? A
comparison of oddball effects between standard-oddball and prediction-oddball tasks, Advances
in Cognitive Psychology 12 (2) (2016) 88–104.
[51] I. Käthner, S. C. Wriessnegger, G. R. Müller-Putz, A. Kübler, S. Halder, Effects of mental
workload and fatigue on the P300, alpha and theta band power during operation of an ERP
(P300) brain-computer interface, Biological Psychology 102 (2014) 118–129.
[52] I. N. Angulo-Sherman, D. Gutiérrez, A link between the increase in electroencephalographic
coherence and performance improvement in operating a brain-computer interface, Computational
Intelligence and Neuroscience 2015 (2015) 67.
Captions of Figures
1 Joints and links of our 5-DOF Dynamixel AX-18A Smart Robotic Arm.
2 Planes πe and πb representing the orientation of the final effector and the robot base, respectively.
3 The intersection of spheres s0 (bottom) and s1 (top) defines the circle c0, from which we find the position of point xh.
4 The intersection of spheres s1 (left) and sh (right) defines the circle c2, from which we find the position of joint J2.
5 The intersection of spheres s2 (bottom) and se (top) defines the circle c3, from which it is possible to find the position of joint J3.
6 Robot and items on the table as seen by the camera during semi-autonomous BCI trials.
7 Required steps of the AV algorithm.
8 Visual representation of contours and centroids of the items on the table, calculated by the AV algorithm in order to obtain their real-world coordinates.
9 Representative r2 map obtained during one training session. The r2 values shown were measured under the RHIM-Rest condition for all channels and frequencies.
10 Setup of the process-control BCI. The windows shown on the screen are used for visualization of stimuli, indication of the current axis of the movement, and viewing of the robot performing the manipulation tasks.
11 Setup of the semi-autonomous goal-selection BCI, as seen by the user. The windows are used for stimulus presentation and visualization of the manipulation tasks.
12 Representation of a P300 waveform calculated for channels O1, O2 and Pz.
13 Performance for users S1 (top) and S2 (bottom) in the process-control BCI during training, cued, and uncued manipulation trials (left, middle, and right columns, respectively). Bars indicate one standard deviation.
14 Performance for users S3 (top) and S4 (bottom) in the semi-autonomous goal-selection BCI during training, cued, and uncued manipulation trials (left, middle, and right columns, respectively). Bars indicate one standard deviation.
15 Amplitude of the P300 waveform during all trials and experiments for all subjects in uncued manipulation trials.
16 Latency of the P300 waveform during all trials and experiments for all subjects in uncued manipulation trials.
Captions of Tables
I Representations of the conformal geometric entities
II Parameters for joint angles calculation
III Features: EEG channels and frequency range (in Hz), and mean accuracy of LDA classifiers for all subjects and training sessions
IV Two-way ANOVA results for P300 latency and amplitude
V One-way ANOVA results for P300 latency and amplitude on channels O1, O2 and Pz
Figure 1: Joints and links of our 5-DOF Dynamixel AX-18A Smart Robotic Arm.
Figure 2: Planes πe and πb representing the orientation of the final effector and the robot base,
respectively.
Figure 3: The intersection of spheres s0 (bottom) and s1 (top) defines the circle c0, from which we
find the position of point xh.
Figure 4: The intersection of spheres s1 (left) and sh (right) defines the circle c2, from which we
find the position of joint J2.
Figure 5: The intersection of spheres s2 (bottom) and se (top) defines the circle c3, from which it is
possible to find the position of joint J3.
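Read against Table I, where a circle in the standard representation is the outer product of two spheres, the constructions described in the captions of Figures 3-5 can be summarized as follows; this is our notational sketch, and the subsequent extraction of a single joint position from each circle follows point-pair procedures such as the one in [30]:

```latex
% Circles obtained as intersections (outer products) of sphere pairs,
% in the document's notation (Table I):
\[
  c_0 = s_0 \wedge s_1, \qquad
  c_2 = s_1 \wedge s_h, \qquad
  c_3 = s_2 \wedge s_e .
\]
```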
Figure 6: Robot and items on the table as seen by the camera during semi-autonomous BCI trials.
(a) Segmentation and binarization
(b) Centroid calculation
(c) Homography transformation
Figure 7: Required steps of the AV algorithm.
Figure 8: Visual representation of contours and centroids of the items on the table, calculated by
the AV algorithm in order to obtain their real-world coordinates.
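To make the three steps of Figure 7 and the centroid output of Figure 8 concrete, here is a minimal OpenCV sketch. It is ours, not the paper's implementation: the file name, thresholding choice, reference points, and table dimensions are illustrative assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("table_view.png")          # webcam frame (assumed file name)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# (a) Segmentation and binarization: separate the items from the table surface.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# (b) Centroid calculation: image moments of each external contour.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

# (c) Homography transformation: map pixel coordinates to robot-frame
# coordinates from four reference points with known table positions [35].
px_pts = np.array([[100, 80], [540, 75], [550, 420], [95, 430]], np.float32)  # assumed
world_pts = np.array([[0, 0], [40, 0], [40, 30], [0, 30]], np.float32)        # cm, assumed
H, _ = cv2.findHomography(px_pts, world_pts)

pix = np.array(centroids, np.float32).reshape(-1, 1, 2)
world_centroids = cv2.perspectiveTransform(pix, H).reshape(-1, 2)
```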
[Figure 9 shows an r2 map over all 19 EEG channels (Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Cz, Fz, Pz) and frequencies from 0 to 70 Hz; the color scale spans r2 values from 0.05 to 0.35.]
Figure 9: Representative r2 map obtained during one training session. The r2 values shown were
measured under the RHIM-Rest condition for all channels and frequencies.
Figure 10: Setup of the process-control BCI. The windows shown on the screen are used for
visualization of stimuli, indication of the current axis of the movement, and viewing of the robot
performing the manipulation tasks.
Figure 11: Setup of the semi-autonomous goal-selection BCI, as seen by the user. The windows
are used for stimulus presentation and visualization of the manipulation tasks.
Figure 12: Representation of a P300 waveform calculated for channels O1, O2 and Pz.
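As a rough illustration of how the amplitude and latency values plotted later in Figures 15 and 16 can be obtained from an averaged waveform such as the one in Figure 12, consider the sketch below; the 250-500 ms search window and the function name are our assumptions, not parameters reported in the paper.

```python
import numpy as np

def p300_features(epoch, fs, window=(0.25, 0.5)):
    """Peak amplitude (uV) and latency (ms) of an averaged ERP epoch.

    epoch: 1-D array, average over stimulus-locked trials (t = 0 at stimulus).
    fs: sampling rate in Hz. The 250-500 ms search window is an assumption.
    """
    lo, hi = (int(w * fs) for w in window)
    segment = epoch[lo:hi]
    k = np.argmax(segment)              # P300 is a positive deflection
    amplitude = segment[k]
    latency_ms = (lo + k) / fs * 1000.0
    return amplitude, latency_ms
```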
[Figure 13 panels: (a) S1, Training; (b) S1, Cued manipulation; (c) S1, Uncued manipulation; (d) S2, Training; (e) S2, Cued manipulation; (f) S2, Uncued manipulation. Each panel plots Performance (%) against Sessions 1-3.]
Figure 13: Performance for users S1 (top) and S2 (bottom) in the process-control BCI during
training, cued, and uncued manipulation trials (left, middle, and right columns, respectively). Bars
indicate one standard deviation.
[Figure 14 panels: (a) S3, Training; (b) S3, Cued manipulation; (c) S3, Uncued manipulation; (d) S4, Training; (e) S4, Cued manipulation; (f) S4, Uncued manipulation. Each panel plots Performance (%) against Sessions 1-3.]
Figure 14: Performance for users S3 (top) and S4 (bottom) in the semi-autonomous goal-selection
BCI during training, cued, and uncued manipulation trials (left, middle, and right columns,
respectively). Bars indicate one standard deviation.
[Figure 15 panels (a)-(l): P300 amplitude (µV) against Trials 1-5 for subjects S1-S4, Sessions 1-3.]
Figure 15: Amplitude of the P300 waveform during all trials and experiments for all subjects in
uncued manipulation trials.
[Figure 16 panels (a)-(l): P300 latency (ms) against Trials 1-5 for subjects S1-S4, Sessions 1-3.]
Figure 16: Latency of the P300 waveform during all trials and experiments for all subjects in
uncued manipulation trials.
Table I: Representations of the conformal geometric entities

Entity     | Standard              | Dual
Point      | P = x + (1/2) x^2 e + e0 |
Point pair | Pp = s1 ∧ s2 ∧ s3     | Pp* = x1 ∧ x2
Line       | l = π1 ∧ π2           | l* = x1 ∧ x2 ∧ e
Circle     | c = s1 ∧ s2           | c* = x1 ∧ x2 ∧ x3
Sphere     | s = P − (1/2) r^2 e   | s* = x1 ∧ x2 ∧ x3 ∧ x4
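To make the conventions of Table I concrete, the sketch below (ours, not part of the paper) encodes the grade-1 entities numerically over the basis (e1, e2, e3, e, e0) and verifies two standard CGA identities; the outer products in the table would require a full multivector implementation.

```python
import numpy as np

# Basis order: (e1, e2, e3, e, e0); metric with e . e0 = -1 and e^2 = e0^2 = 0.
G = np.zeros((5, 5))
G[:3, :3] = np.eye(3)
G[3, 4] = G[4, 3] = -1.0

def point(x):
    """Conformal point P = x + (1/2) x^2 e + e0 (Table I)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [0.5 * x @ x, 1.0]])

def sphere(center, r):
    """Standard sphere s = P - (1/2) r^2 e (Table I)."""
    s = point(center)
    s[3] -= 0.5 * r**2
    return s

def inner(a, b):
    return a @ G @ b

# Identity 1: the inner product of two points encodes squared distance,
# P1 . P2 = -(1/2) |x1 - x2|^2.
p1, p2 = point([1.0, 0.0, 0.0]), point([0.0, 2.0, 0.0])
assert np.isclose(inner(p1, p2), -0.5 * 5.0)

# Identity 2: a point X lies on sphere s if and only if s . X = 0.
s = sphere([0.0, 0.0, 0.0], 2.0)
assert np.isclose(inner(s, point([2.0, 0.0, 0.0])), 0.0)
```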
Table II: Parameters for joint angles calculation

k | αk              | βk
0 | e2              | (πe ∧ e) · e0
2 | (L12 · e0) · e  | (L23 · e0) · e
3 | (L23 · e0) · e  | (L3e · e0) · e
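The citation of ATAN2 [34] suggests that each joint angle is recovered from the scalar pair (αk, βk) of Table II through the two-argument arctangent; a minimal sketch under that assumption (the function name, and even the ordering of the two arguments, are ours):

```python
import numpy as np

def joint_angle(alpha_k, beta_k):
    """Assumed recovery of joint angle k from the pair of Table II;
    atan2 keeps the correct quadrant (cf. [34])."""
    return np.arctan2(alpha_k, beta_k)
```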
Table III: Features: EEG channels and frequency range (in Hz), and mean accuracy of LDA
classifiers for all subjects and training sessions

Subject | Session | Features | Accuracy
S1 | 1 | C4 (11-15), P3 (11-15), C3 (7-11), Fp2 (13-17) | 65%
S1 | 2 | C3 (7-13), P3 (9-13), P4 (9-13), P4 (13-17), Cz (9-13) | 64%
S1 | 3 | C4 (9-15), C3 (13-17), P3 (23-27), P4 (25-29) | 63%
S2 | 1 | C4 (9-13), P4 (11-15), F4 (17-21), F4 (11-15), Cz (11-15) | 65%
S2 | 2 | C4 (15-19), C3 (19-23), P4 (21-25), Cz (19-23), Fp1 (25-29) | 62%
S2 | 3 | C4 (19-23), C4 (17-21), Cz (19-23), P3 (21-25), F3 (19-23) | 60%
S3 | 1 | F4 (15-21), F3 (9-11), P4 (9-13), F4 (15-21) | 56%
S3 | 2 | C3 (11-15), C4 (7-11), P3 (15-19), P4 (11-15) | 61%
S3 | 3 | C3 (9-15), C4 (11-15), P3 (11-15) | 78%
S4 | 1 | C4 (9-13), C3 (9-13), P4 (17-21), F4 (19-23), F3 (19-23) | 73%
S4 | 2 | F3 (11-15), C4 (7-13), P4 (17-21), P3 (17-21), F4 (19-23) | 72%
S4 | 3 | C4 (7-11), C3 (13-17), F4 (17-21), C4 (11-15) | 60%
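As an illustration of how the accuracy column of Table III can be produced, the sketch below cross-validates an LDA classifier on band-power features such as those listed per subject and session; the data are random placeholders, and the labels, trial count, and cross-validation scheme are our assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per trial, one column per selected
# (channel, band) power feature of Table III, e.g. for S1, session 1:
# C4 (11-15 Hz), P3 (11-15 Hz), C3 (7-11 Hz), Fp2 (13-17 Hz).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 4))        # placeholder band-power features
y = rng.integers(0, 2, 120)              # 0 = rest, 1 = motor imagery (assumed)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()  # cf. accuracy column of Table III
print(f"Mean LDA accuracy: {acc:.0%}")
```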
Table IV: Two-way ANOVA results for P300 latency and amplitude

Subject | Latency: Trial | Latency: Channel | Latency: Interaction | Amplitude: Trial | Amplitude: Channel | Amplitude: Interaction
S1 | F = 3.69, p = 0.0147 | F = 0.25, p = 0.782 | F = 0.69, p = 0.6994 | F = 2.45, p = 0.0676 | F = 3.08, p = 0.0609 | F = 0.42, p = 0.8969
S2 | F = 9.33, p = 0.0001 | F = 0.13, p = 0.8816 | F = 0.1, p = 0.999 | F = 1.26, p = 0.3074 | F = 0.48, p = 0.6217 | F = 0.14, p = 0.9970
S3 | F = 0.03, p = 0.9983 | F = 0.59, p = 0.5604 | F = 0.44, p = 0.8891 | F = 0.83, p = 0.5191 | F = 0.01, p = 0.9924 | F = 0, p = 1
S4 | F = 1.05, p = 0.4003 | F = 1.5, p = 0.24 | F = 0.3, p = 0.9589 | F = 1.81, p = 0.1534 | F = 0.17, p = 0.8405 | F = 0.13, p = 0.9972
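The two-way (trial x channel) tests of Table IV can be reproduced with a standard ANOVA routine; a sketch assuming a long-format table of single-measurement rows (the file and column names are ours, not from the paper):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed long-format data: one row per (trial, channel) measurement,
# with columns: latency, trial, channel.
df = pd.read_csv("p300_latency_S1.csv")  # hypothetical file

model = ols("latency ~ C(trial) * C(channel)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)  # F and p for Trial, Channel, interaction
print(table)
```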
Table V: One-way ANOVA results for P300 latency and amplitude on channels O1, O2 and Pz

Subject | Latency: O1 | Latency: O2 | Latency: Pz | Amplitude: O1 | Amplitude: O2 | Amplitude: Pz
S1 | F = 3.54, p = 0.0476 | F = 0.76, p = 0.5767 | F = 0.55, p = 0.7055 | F = 1.79, p = 0.2066 | F = 1.61, p = 0.246 | F = 0.82, p = 0.5421
S2 | F = 2.1, p = 0.1554 | F = 2.13, p = 0.1518 | F = 2.5, p = 0.1091 | F = 0.3, p = 0.8705 | F = 0.23, p = 0.9171 | F = 0.1, p = 0.9806
S3 | F = 0.37, p = 0.8238 | F = 0.12, p = 0.9731 | F = 0.54, p = 0.7089 | F = 1.31, p = 0.3322 | F = 0.13, p = 0.9687 | F = 0.41, p = 0.8043
S4 | F = 1.68, p = 0.2311 | F = 4.52, p = 0.0242 | F = 2.45, p = 0.1138 | F = 2.99, p = 0.0728 | F = 2.27, p = 0.1336 | F = 9.5, p = 0.0019
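Similarly, each per-channel entry of Table V is a one-way ANOVA across trials; a self-contained sketch with synthetic latencies (the sample sizes and values are placeholders):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-trial latency samples for one channel (e.g., Pz):
# one array per trial, as in the per-channel tests of Table V.
rng = np.random.default_rng(0)
latencies_by_trial = [rng.normal(350, 40, size=12) for _ in range(5)]

F, p = f_oneway(*latencies_by_trial)  # one-way ANOVA across the five trials
print(f"F = {F:.2f}, p = {p:.4f}")
```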