IET Cyber-Physical Systems: Theory & Applications
Research Article
Mind your thoughts: BCI using single EEG
electrode
ISSN 2398-3396
Received on 25th August 2018
Revised 12th November 2018
Accepted on 22nd November 2018
E-First on 15th February 2019
doi: 10.1049/iet-cps.2018.5059
www.ietdl.org
Sujay Narayana1, RangaRao Venkatesha Prasad1, Kevin Warmerdam1
1Embedded Software Department, Delft University of Technology, The Netherlands
E-mail: sujay.narayana@tudelft.nl
Abstract: These days, Internet of things (IoT) research is driving large-scale development and deployment of many innovative applications. IoT has indeed brought many smart applications to the doorstep of users. IoT has also made it possible to connect many sensors and control equipment. Here, the authors address an important application for the physically challenged. The authors present a brain–computer interface (BCI) system to lock/unlock a wheelchair and control its movements. The approach presented here uses NeuroSky's MindWave Mobile, a single-electrode electroencephalography (EEG) headset that can be connected to any Bluetooth-enabled system. The raw EEG data from the headset is processed on an Android mobile device to extract the electromyography (EMG) patterns that occur due to eye blinks and the activity of muscles in the jaw. These patterns are used to control the movement of a wheelchair in all possible directions. A biometric security system is provided to lock and unlock the wheelchair by extracting information about the different brain waves from the raw EEG signal. In this system, only the user knows the password, which is generated using brain waves, and only he/she can lock/unlock the wheelchair and control it. The proposed system was verified and evaluated using a prototype.
1 Introduction
In recent years, the brain–computer interface (BCI) has made communication with machines more accessible. Usually, electroencephalography (EEG) is used in BCI. Conventional EEG-based systems require a large set of electrodes monitoring multiple points around the head. With such a large number of electrodes, the main focus of BCI has hitherto been medical research, where brain activities are studied to diagnose mental diseases; it is also used for diagnosis and, to some extent, treatment. BCI technology has enabled many medical support systems in which a paralysed person can move using a brain-controlled wheelchair. BCIs are used in treating several neurological or psychiatric diseases such as Alzheimer's disease, where patients in the advanced stages lose the ability to communicate verbally. BCI systems allow these patients to convey their basic thoughts and emotions in terms of ‘yes’ or ‘no’. Companies like Intel are exploring innovative ways to develop and introduce new BCI systems that can be a boon to future society.
Apart from medical applications, some prototypes have
demonstrated the possibility of introducing BCI in the area of
entertainment. These include applications like a BCI ‘painting’ system, where a physically disabled patient can experience the pleasure of painting without using their hands [1]. BCI games have also
been used to stimulate mental exercises. In the context of Internet
of things (IoT), there are a few simple applications, such as
identifying brain activity when an object is touched [2], game play
[3], robot navigation [4, 5], child educator [6], personal career plan
[7], control of prosthetics, vision analysis etc. Companies like
NeuroSky, Emotiv, and others offer developer tools and headsets which can be wirelessly connected. However, these devices use a single dry sensor, in contrast to medical systems, which predominantly use multiple wet electrodes for precision measurements.
For large-scale use of BCI systems in daily life, it is important
to have sensors that are simple to use. Here, the first challenge is to
get useful command and control with fewer sensors (preferably a single one), unlike the multiple wet electrodes used in medical systems. While a single electrode and its placement limit the potential applications of such devices, they are inexpensive and offer ease of use. When it comes to complex analysis of the different classes of neural oscillations, it should be noted that head movements such as blinking are strongly detectable through such a sensor along with the EEG signals. The responsiveness of the signals originating from muscular movements, called electromyography (EMG), is superior.
Within the broader system, the latter can be used to control the
wheelchair, while the former can be used for biometric security.
The voltage level of the electrical signals generated by brain cells is much lower than that of muscle cells. Hence, muscle movements such as eye blinking and tensing the jaw generate signals with a higher voltage than those of brain cells. Yet, in both cases, the output signal is in the range of a few microvolts. Therefore, the output from the EEG sensor is amplified to boost the signal before it is converted to digital form; however, in the process, the noise is also equally amplified. Consequently, the traditional method of identifying the peak amplitudes in the EEG signal [8] is not fruitful if muscular activity is to be detected. The second challenge, then, is to extract meaning from such signals.
The approach presented in this paper goes beyond the
conventional method of detecting muscular activity such as jaw
tension or eye blinks by peak detection. We propose novel signal conditioning methods to detect multiple activities (which, in turn, can be used as commands). This work proposes a method of
interpreting the EEG signal produced by an off-the-shelf BCI
headset. In our experiments, we used NeuroSky's MindWave
Mobile [9], which has a single electrode. This headset is
comparatively inexpensive and communicates wirelessly via
Bluetooth. The third challenge is to make this system universally
usable, i.e. to make it lightweight so as to execute on a mobile
device. This also makes it easy to connect to the Internet. Once the signals are interpreted, they can be used to control any other connected device. As an example, we demonstrate control of a miniature electric wheelchair and also secure it from being used by others. The contributions of this work include:
i. We propose lightweight signal processing algorithms to extract meaning from the raw EEG data rather than from the processed signals provided by the EEG headset.
ii. We blend EEG and EMG signals and demonstrate controlling
of different devices and applications.
iii. We propose a method for identifying mental tasks, distinguishing whether an individual is in an attentive, meditative, or neutral state (relaxed, not doing any work).
iv. A biometric identification system is proposed using EEG
signals to lock and unlock the wheelchair to secure it from
being used by others.
v. The movement control of a wheelchair in all possible
directions is accomplished by interpreting jaw tension and eye
blinking through the analysis of EMG signals.
An important aspect is that the signals are of very low frequency, on the order of a few hertz. Therefore, processing such signals is highly challenging.
The rest of this article is structured as follows. In Section 2, we review the state of the art in the field of BCI. In Section 3, we explain
the basic principle behind the operation of BCI and different
classifications of brain waves. In Section 4, novel methods are
presented for the extraction of information from EEG and EMG
signals in order to distinguish different commands. Experimental
set-up and the evaluation of the system are described in Sections 5
and 6. Finally, we conclude in Section 7.
2 Related work
Online classification of two mental tasks using a support vector machine (SVM)-based BCI system is dealt with in [10]. In this approach, signals are registered using 16 electrodes and differentiated using an SVM-based classifier in order to control a robotic arm in two-dimensional space. Hal et al. [11] presented a
real-time sleep detection system using a NeuroSky MindSet. Here,
the EEG signal from a single, dry sensor is filtered into different
EEG frequency bands and analysed to indicate the onset of sleep.
An EEG-based eye blink detection system for BCI is reported in
[8] by identifying the voltage peaks which are generated during eye
blink. Campisi and Rocca [12] address several techniques for an
automatic user recognition system based on the analysis of brain
activity in real-life applications.
Huang et al. [13] proposed the fastest BCI, called the ‘fast Phonics-to-Chinese-Character system’, for individuals to write Chinese characters using their brain waves. They developed a novel spelling system for writing Chinese characters using both the P300 and N200 spelling systems. A BCI with EEG signals for automatic vowel
recognition based on articulation mode is implemented in [14]. The
approach consists of using brain signals while thinking of a
specific task corresponding to vowels. A BCI classifier for wheelchair commands using a neural network with fuzzy particle swarm optimisation is dealt with in [15]. They present a classification of three mental tasks based on BCI that uses the Hilbert–Huang transform for extracting features from brain waves. These features are then used to control a wheelchair and compose words. A design of a BCI system for piloting a wheelchair using five classes of motor-imagery-based EEG is presented in [16], where EEG signals are analysed using the wavelet transform and features are extracted. These features are classified using SVM to
control the directional movement of a wheelchair. A navigation
system using brain waves to control a robot is addressed in [4].
Here, the system is tested with two headsets, one from NeuroSky
and one from Emotiv. However, they do not process the raw signals
from the headsets. Instead, they use the processed signals provided
by these prototypes. Eid and Fernandes [17] present a BCI system
that reminds readers when they are not focusing on the text. They
use NeuroSky MindWave headset to measure the attention level of
readers and combine the measurement with visual-based
information. They make use of the processed EEG signal values for
reading attention and meditation levels provided by the headset.
As we see in the literature, most of the focus is on BCI systems using multiple electrodes. Many of the single-electrode experiments report using the NeuroSky headset, but they all use the processed attention and meditation levels provided by the vendor. The work presented in this paper goes beyond the conventional use of the NeuroSky MindWave headset by exploiting the raw EEG signals from the electrode instead of using the blink details or the attention and meditation levels given directly by the headset. Moreover, we
demonstrate how a BCI system can be used in the field of
biometric security, a scarcely represented topic in the BCI-related
literature. In the subsequent section, we describe BCI and the
methodology used.
3 Neural oscillation
The brain communicates with different parts of the body by
sending electrical signals via neurons. Hence, there is a
concentration of electromagnetic signals in and around the brain.
The amplitude of these signals is very low (a few microvolts), yet
measurable. The EEG sensors placed on the head can detect
different electrical signals corresponding to the brain activity.
There are two types of EEG electrodes, viz. invasive, where the sensors are placed inside the skull by surgery to measure the electrical impulses, and non-invasive, where the electrodes are placed on the head, touching the scalp, to measure the signals. Non-invasive electrodes produce poorer signal resolution as the skull dampens the electromagnetic signals. Non-invasive electrodes are further classified into active (wet) electrodes and passive (dry) electrodes. Wet electrodes make use of a conducting gel applied between the scalp and the electrode for better conduction of the signals. Dry electrodes are worn without this, similar to the way a hair band or a cap is worn.
3.1 Electrode placement
For analysing the brain's activities, the brain is divided into four
parts or lobes and each of them is responsible for a specific task
performed by the brain. The frontal lobe is responsible for making
judgements, like controlling some motor functions. The parietal
(top) lobe is involved in sensory functions, like visual attention.
The occipital (back) lobe controls the interpretation of the vision.
Lastly, the temporal (bottom middle) lobe governs hearing, smell,
and short-term memory. When a certain task is performed by the
brain, the electrical signals related to respective tasks are
concentrated in the corresponding lobe. To get consistent readings
from a specific region of the head for a specific task, a standard
‘10–20 system’ is used, where non-invasive electrodes are placed
accurately. The 10–20 system is an internationally recognised
system that specifies the physical distances between the adjacent
electrodes placed on the head. The numbers 10 and 20 refer to
distances used in terms of percentage of the total distance between
the nasion and inion, as shown in Fig. 1. Here, the markers F, Fp, C, P, T, and O stand for frontal, pre-frontal, central, parietal, temporal, and occipital positions, respectively. The other electrodes
are placed at similar fractional distances. NeuroSky's MindWave
Mobile uses a passive type EEG sensor with one electrode that
rests on the forehead above the left eye, or the Fp1 position, to
measure the signal from the frontal lobe. The MindWave headset is
shown in Fig. 2. The ground reference for the electrode is taken
from the A1 position using an ear clip.
3.2 Distinct waves
Neural oscillations measured by EEG can be characterised by their
frequency, amplitude, and phase. These properties change when the
tasks performed by the brain change.
Fig. 1 The 10–20 electrode placement system (http://commons.wikimedia.org/wiki/File:21_electrodes_of_International_10-20_system_for_EEG.svg)
Neural waves are most generally classified as follows [18, 19]; we reproduce the classification here for the sake of completeness: (i) delta waves (1–4 Hz) are low-frequency signals
generated in the brain when the person is in deep dreamless sleep;
(ii) theta waves (4–8 Hz) can be seen during dreams and in a deep state of meditation; (iii) alpha waves (8–13 Hz) are dominant in a
relaxed state like visualisation and meditation; (iv) beta waves (13–
30 Hz) are generated usually in working state, especially when the
person is attentive, applying logic, anxious, or stressed; and (v)
gamma waves (30–70 Hz) are high-frequency waves in the EEG
signal. They can be seen in states of peak performance, both
physical and mental, during high focus and concentration. These
signal properties can be extracted from neural recordings using
frequency-domain analysis. Activities in the brain can be detected
from wave patterns in the corresponding frequency bands.
4 Signal processing
In our work, we concentrate on the meditative and attentive states of the brain, identified using alpha and beta waves, respectively. By switching between these states, a biometric password is created that can be used to lock/unlock an electronic system. Once the system is unlocked, we make use of the EMG signals created by eye blinks and jaw tension to command the system (e.g. moving a wheelchair forward or backward). In this section, we first explain how the EEG pattern is recognised and how the different states (alpha and beta waves) are classified, and then describe EMG pattern recognition for eye blink and jaw tension detection.
4.1 EEG pattern recognition
The first step in recognising a brain state is to acquire the signal from the sensor and filter the noise. Then, the frequencies outside the band of interest need to be eliminated. For example, to identify the meditation state, all the frequencies in the signal that do not correspond to alpha waves are filtered out. Finally, the magnitude of either the alpha waves in a resting/meditative state or the beta waves in an attentive/focused state is differentiated from that in a neutral state of the brain. To analyse a state, a band-pass filter is used to isolate the frequency range appropriate for either alpha or beta waves. We used sixth-order Butterworth filters, corresponding to
ω = [A  B],  (1)
where ω contains the cut-off frequencies of the filter, and A and B are the individual cut-off frequencies, normalised to the Nyquist frequency (fN = 256 Hz). Hence, the normalised lower cut-off frequency for the filter can be obtained as fL/fN, where fL is the lower cut-off frequency. Similarly, the normalised higher cut-off frequency can be obtained as fH/fN, where fH is the higher cut-off frequency. This leads to
ωα = [0.03125  0.05468],  (2)
ωβ = [0.05468  0.078125],  (3)
respectively, for the alpha and beta waves. The band defined by ωα spans fL = 8 Hz to fH = 14 Hz, and the band defined by ωβ spans fL = 14 Hz to fH = 20 Hz. Since it is the
intention to use this for biometric security, users switch between
these states in a distinct pattern. The transitions between them are
included in the measurements. In a meditative state, eyes are
closed, breathing is slowed down, and a single thought is sustained.
During the attentive state, some new information is read or some
mathematical problems are solved. The neutral state consists of open eyes but otherwise no exertion.
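To make the band isolation concrete, the following is a minimal sketch of the sixth-order Butterworth band-pass filtering described by (1)–(3), written in Python with SciPy (an assumption; the actual system implements the filtering in the Android application). The 8–14 Hz and 14–20 Hz cut-offs are the ones used above; all function and variable names are illustrative.

```python
# Minimal sketch (assumption: SciPy on a PC; the Android app implements the
# equivalent filtering natively). Cut-off frequencies follow (2) and (3).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 512.0           # sampling rate of the raw MindWave stream (Hz)
NYQ = FS / 2.0       # Nyquist frequency, 256 Hz

def bandpass(raw, f_low, f_high, order=6):
    """Sixth-order Butterworth band-pass with cut-offs normalised to the Nyquist frequency."""
    sos = butter(order, [f_low / NYQ, f_high / NYQ], btype="band", output="sos")
    return sosfiltfilt(sos, np.asarray(raw, dtype=float))

alpha_band = lambda raw: bandpass(raw, 8.0, 14.0)    # isolate alpha waves (meditative state)
beta_band = lambda raw: bandpass(raw, 14.0, 20.0)    # isolate beta waves (attentive state)
```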
In Fig. 3, the difference in alpha activity between the meditative
state and the neutral state is shown. This plot results from a
continuous measurement of 40 s which includes the transition
between the states halfway through. We observe in the figure that
the amplitude of the brain signal in meditative state is high when
compared with that in neutral state. Similarly, beta activity during
the transition between the neutral state and the attentive state is
shown in Fig. 4. The output signal from the EEG sensor has been
smoothed-out using a moving average filter with a span of 100
samples. Given these outputs, the state can be identified using the history of the time-domain signal. After filtering and transformation into the frequency power spectrum, the states are identified by the magnitude within the two frequency ranges.
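For reference, a minimal sketch of the smoothing used for these plots, i.e. a moving-average filter with a span of 100 samples, assuming NumPy is available:

```python
# Minimal sketch: moving-average smoothing with a span of 100 samples, as used for
# the plots of alpha/beta activity (the span value is from the text above).
import numpy as np

def moving_average(x, span=100):
    kernel = np.ones(span) / span
    return np.convolve(x, kernel, mode="same")
```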
For an individual, the magnitudes of the alpha and beta waves in the neutral state remain almost constant over time, in contrast to the meditative or attentive states, respectively. In both Figs. 3 and 4, the magnitude of the alpha and beta waves stays almost equal to 500 over the time period; this corresponds to the neutral state. The magnitude of the alpha wave in the meditative state has a peak reaching up to 2000, and that of the beta wave reaches up to 800 during attention. This magnitude holds true only for a particular individual with the NeuroSky electrode placed at the Fp1 location on his/her head.
This threshold may vary for different users with an electrode placed in the same location or for the same user with an electrode at different locations on the forehead. Considering the neutral condition (where the individual is in a relaxed state, not doing any work) as a reference, the magnitudes of the alpha and beta waves during the meditative and attentive states are identified to differentiate between these two mental states. A combination of alpha and beta waves recorded over a specific time can be used to generate a biometric password. Several subwindows are considered, wherein, in each window, the user has to perform a specific task to induce alpha or beta waves at a high concentration. Fig. 5 shows a sample pattern with a total time of 23 s which includes four subwindows of 5, 5, 8, and 10 s in series, respectively. For the first 5 s, alpha waves at a high concentration are induced by meditation. For the next 5 s, the user is not doing any task and remains neutral. The last two windows include high-amplitude beta waves and a neutral state, stimulated by being attentive and neutral, respectively.
Fig. 2 Neurosky mindwave mobile EEG headset
Fig. 3 Alpha activity, transitioning from meditative to neutral state
Fig. 4 Beta activity, transitioning from neutral to attentive state
This whole pattern recorded for 23 s will form the biometric password
that can be used to secure a system. The system can be unlocked only if the same pattern is generated, which is only possible by the rightful user who knows the password. This pattern is difficult for anyone other than its creator to replicate. Though we consider a sample window with a total duration of 23 s, the duration of each subwindow, and hence the total lock/unlock time, can be varied. However, this depends on the capability of the user to switch between different mental tasks in the specified time period. This uniqueness is also an advantage in terms of security, as the lock pattern cannot easily be replicated by another user.
Once the password pattern is generated for a specific duration, it is stored securely on the Android device along with the length of each subwindow. Later, when a user wants to lock/unlock the system, the EEG signal from the sensor is acquired on the fly. As soon as the acquisition time matches the first window of the password, the mental state in the acquired subwindow is identified by comparing it with the neutral state, which is taken as a reference. If a match is found, then the same process is repeated for the subsequent subwindows until all the subwindows of the acquired signal match the password pattern. The magnitude of the
samples in each subwindow can be found as below
St=
i=1
n
ft[i],
(4)
where St is the sum of the magnitude of samples in the window t
and n the number of samples in the fast Fourier transform (FFT) of
acquired signal, f[i], in the subwindow t.
If Ts is the threshold for state s (attention or meditation), calculated with reference to the neutral state for the user, and St ≥ Ts, then the mental task performed in subwindow t is s. In this way, every task in each subwindow of the password has to be compared with that of the acquired signal to get a match. On a perfect match, the system is unlocked.
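The subwindow matching can be sketched as follows. This is an illustrative Python rendering under the assumption that the per-user thresholds and the stored task sequence come from an enrolment phase, and that band sums over the FFT magnitudes stand in for St in (4); the function and variable names are not from the paper.

```python
# Minimal sketch of the password matching in Section 4.1 (assumption: per-user
# thresholds and the stored task sequence come from an enrolment phase).
import numpy as np

def subwindow_state(samples, thr_alpha, thr_beta, fs=512):
    """Classify one subwindow as 'meditate', 'attend', or 'neutral' from FFT magnitudes."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    s_alpha = spectrum[(freqs >= 8) & (freqs < 14)].sum()   # St over the alpha band, as in (4)
    s_beta = spectrum[(freqs >= 14) & (freqs < 20)].sum()   # St over the beta band
    if s_alpha >= thr_alpha:
        return "meditate"
    if s_beta >= thr_beta:
        return "attend"
    return "neutral"

def unlock(eeg, password, durations, thr_alpha, thr_beta, fs=512):
    """Unlock only if every subwindow reproduces the stored sequence of mental tasks."""
    start = 0
    for task, seconds in zip(password, durations):
        stop = start + int(seconds * fs)
        if subwindow_state(eeg[start:stop], thr_alpha, thr_beta, fs) != task:
            return False
        start = stop
    return True
```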
In this experiment, we considered only alpha and beta waves, as they can be measured efficiently in the frontal lobe during meditation and attention. Moreover, a single-electrode sensor limits the potential to measure other neural oscillations. When a greater number of electrodes and more types of brain waves are considered, multiple secure passwords with different combinations of varying subwindows could be generated in a shorter period.
4.2 EMG pattern recognition
As an alternative to attempting to read the EEG signals, the electrical potentials associated with EMG, i.e. the activity of muscles, can be analysed with the same sensor. In our experiments, two scenarios exploiting the disruptive signal produced by muscle movements in the head are used as control signals.
In the EMG measurements from the MindWave Mobile headset,
blinking of the eyes causes a waveform of a single period with
significant amplitude which can be seen in Fig. 6a. Whereas
blinking gives this single, instantaneous event, a continuous signal can be achieved by tensing the muscles. A strong example of this follows from clenching the jaw, as can be seen in Fig. 6b. The muscle tension causes a continuous signal composed of a broad range of frequencies that differ from those of blinking.
4.2.1 Blink detection: The EEG signal pattern generated while
blinking is different for different people. Even though a single
pulse is generated for every blink, the amplitude, frequency, and
phase of the pulse can vary for different persons. Hence, it is not
efficient to assign a common threshold to these signal parameters
to detect blinking. Owing to the distinct shape of the influence of
blinking on the EEG, a correlation function is used to identify
blinking. A stored pattern of blinking for a particular user, 250
samples in length, is constantly compared with the filtered previous
250 samples received from the headset. This is done according to
C = ∑_{i=1}^{250} P[i] D[i],  (5)
where C is the current measure of correlation, P a vector
containing the stored pattern of blinking, and D a vector consisting of the last 250 filtered samples received from the headset. Specifically, we use the filter
D[n] = c1 S + c2 D[n − 1],  (6)
where again D is the vector containing the filtered data, S is the newly received sample, and c1 and c2 are constants determining the weight
of new samples. Values of c1= 0.1 and c2= 0.9 were found
empirically to get the desired low-pass behaviour. To illustrate this,
Fig. 7 shows the filtering applied to the raw signal. The stored
blinking pattern used in the correlation function, and the measure of correlation between the stored pattern and the filtered data while blinking, are shown in Fig. 8.
Hysteresis thresholds within the correlation result in (5) then
determine when a blink has occurred. To further modulate
information with blinking, consecutive blinks within 1 s of each
other are counted. This causes a delay in the output of the signal
equal to the allowed time between blinks in exchange for the
ability to differentiate between different blinking patterns.
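A minimal sketch of this blink detector, combining the low-pass filter in (6) with the running correlation in (5), is given below. The hysteresis thresholds are illustrative (the paper does not state their values), and the correlation is normalised here only so that the example thresholds are scale-free; class and variable names are assumptions.

```python
# Minimal sketch of the blink detector built from (5) and (6) (assumption: a 250-sample
# blink template per user is recorded beforehand; HI/LO thresholds are illustrative).
import numpy as np

C1, C2 = 0.1, 0.9        # low-pass weights from (6)
HI, LO = 0.6, 0.4        # hysteresis thresholds on the normalised correlation

class BlinkDetector:
    def __init__(self, template):
        self.template = np.asarray(template, dtype=float)  # stored pattern P, 250 samples
        self.buf = np.zeros_like(self.template)            # filtered history D
        self.active = False

    def update(self, sample):
        """Feed one raw sample; return True at the moment a blink is detected."""
        # first-order IIR low-pass (6), then shift into the comparison window
        filtered = C1 * sample + C2 * self.buf[-1]
        self.buf = np.roll(self.buf, -1)
        self.buf[-1] = filtered
        # correlation (5), normalised so the example thresholds are scale-free
        c = np.dot(self.template, self.buf)
        c /= (np.linalg.norm(self.template) * np.linalg.norm(self.buf) + 1e-9)
        if not self.active and c > HI:
            self.active = True
            return True
        if self.active and c < LO:
            self.active = False
        return False
```

Counting consecutive detections within a 1 s window, as described above, can then be layered on top of this detector to distinguish two-blink from three-blink commands.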
4.2.2 Jaw tension detection: The influence of muscle tension in
the jaw is most distinctly characterised by the broad range of
frequencies which begin to occur. If the average magnitude over this frequency range is taken, however, interference such as blinking can still result in a higher value than tensing the jaw. Moreover, as can be seen in Fig. 6b, the magnitude is discontinuous over this range of frequencies.
Fig. 5 Subwindow
Fig. 6 Signal processing: (a) EEG output with blinking, (b) EEG output with jaw tension
To identify jaw tension, an algorithm is devised which discriminates based solely on the breadth of this frequency range, while tolerating many discontinuities in magnitude over that range. The corresponding pseudocode is shown in Algorithm 1 (see Fig. 9). In the frequency domain, the power spectrum is divided into ten evenly spaced regions. The region width must be large enough to bridge the discontinuities in the spectrum. Within each region, a magnitude threshold is checked. Then, jaw tension is detected from the number of regions exhibiting a signal. In the given algorithm, a frequency range of 20–90 Hz, a choice of ten regions, an inner threshold of 15, an outer threshold of 0, and a hysteresis margin of 4 result in a sufficiently robust detection scheme.
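A possible rendering of this check is sketched below, under the assumption that the inner threshold applies to each region's mean magnitude and that the outer threshold and hysteresis margin apply to the count of active regions; the exact bookkeeping in Algorithm 1 may differ.

```python
# Minimal sketch of the jaw-tension check (assumption: per-region mean magnitude vs. the
# inner threshold, count of active regions vs. the outer threshold plus hysteresis margin).
import numpy as np

F_LOW, F_HIGH = 20.0, 90.0   # frequency range scanned for jaw-tension energy (Hz)
N_REGIONS = 10               # evenly spaced regions over that range
INNER_THR = 15.0             # per-region mean-magnitude threshold
OUTER_THR = 0                # base count of active regions
HYSTERESIS = 4               # extra regions required to switch the detector on

def jaw_tension(spectrum, freqs, currently_on):
    """Return the new on/off state given one FFT frame of the raw EEG signal."""
    edges = np.linspace(F_LOW, F_HIGH, N_REGIONS + 1)
    active = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        if band.size and band.mean() > INNER_THR:
            active += 1
    # hysteresis: harder to turn the detector on than to keep it on
    if currently_on:
        return active > OUTER_THR
    return active > OUTER_THR + HYSTERESIS
```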
5 Implementation
In this section, we explain our experimental set-up and different
modules involved in the set-up.
5.1 Experimental set-up
To analyse the EEG signals, the raw data obtained from the
electrode is sampled at 512 Hz and sent over a Bluetooth
connection to a mobile device, which a patient could easily carry
around while operating the wheelchair. Here, the Android
operating system is used to process the raw data. An application is
developed to process the signal and output commands to the
wheelchair to control its movement. This last step of
communication is through Wi-Fi. The wheelchair is represented by
a small robot vehicle. It includes a Wi-Fi chip to interpret
commands and has two motorised wheels, just as a wheelchair
would have. This experimental set-up is used to analyse the different brain waves and their properties so that a prototype can be built.
We built a prototype consisting of a robotic vehicle to imitate a
wheelchair and a biometric security system to demonstrate how
BCI can be used in security applications. With this prototype, we
demonstrate:
• moving a wheelchair in all directions using EMG signals (blink detection and jaw tension detection),
• a biometric security system to lock and unlock the wheelchair using EEG signals (alpha and beta waves) so that the movement can be controlled later.
We provide implementation and details of testing the proposed
idea with a prototype as follows:
i. Wheelchair movement: The prototype consists of a server program running on a PC to which an Android mobile and the wheelchair are connected. We use a two-wheeled robotic vehicle that acts as the wheelchair, as shown in Fig. 10. The NeuroSky MindWave Mobile is connected to the mobile via Bluetooth, as the signal processing for controlling the movement of the wheelchair takes place on the mobile device. The raw EEG data received on the Android mobile from the EEG headset is processed for blink and jaw tension detection as described in Section 4.2. Creating tension in the jaw corresponds to forward movement of the wheelchair. Two fast eye blinks and three fast blinks correspond to turning the wheelchair left and right, respectively. A blinking pattern for backward movement was not created, as this can be achieved with a combination of left/right turns and forward movement. Relaxing the jaw makes the wheelchair stop moving forward.
ii. Biometric security: To lock or unlock the wheelchair using brain waves, the acquired raw data is processed on the Android device to extract the information from the alpha and beta waves and to identify the task performed by the user as either attention or meditation. The wheelchair is secured with a password. This password is a combination of mental tasks performed by the user, generated using the information obtained from the alpha and beta waves, as explained in Section 4.1. The wheelchair is unlocked, and its movement can be controlled, only if the user is able to reproduce the password pattern by performing the same mental tasks.
Fig. 7 Raw and filtered data through blinking
Fig. 8 Stored pattern and correlation function through blinking
Fig. 9 Algorithm 1: Jaw tension detection
Fig. 10 Prototype of the wheel chair
5.2 Components
The components of the set-up are described below.
5.2.1 Server: Although the server is not required in the case of a
one-to-one Wi-Fi connection from the Android mobile to the
wheelchair directly, a centralised server system running on a PC is
developed, which acts as a mediator to establish a connection
between the mobile device and the wheelchair. Any number of
other devices and applications can be connected to this centralised
server to perform specific predefined tasks depending on the
mental or muscular effort performed by a user. On startup, the server waits for the devices to connect. There is no predefined order in which these devices must connect to the server, as the identification of devices takes place internally. Once both the mobile device and the wheelchair are connected to the server, the wheelchair movement is controlled by blinking and creating tension in the jaw.
5.2.2 Mobile application: An Android mobile application was
developed to which the NeuroSky MindWave headset can be
connected via Bluetooth. The complete signal processing algorithm
for blink and jaw tension detection is run by this app to control the
wheelchair. The existing jTransforms library is used to calculate
the FFT of the raw EEG signals on Android mobile. Once the
signals are processed, the statuses of blinking and jaw tension
events are sent to the server. The server then sends the corresponding commands, front, left, right, and stop, to the wheelchair for the tasks jaw tension, two blinks, three blinks, and jaw relaxation, respectively.
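A minimal sketch of this relay logic is given below, assuming a simple line-based TCP protocol between the mobile application, the server, and the wheelchair; the message names and port are illustrative and not taken from the paper.

```python
# Minimal sketch of a relay server (assumption: a line-based TCP protocol; the paper's
# server implementation is not described at this level of detail).
import socketserver

COMMANDS = {
    "JAW_TENSION": "FRONT",   # clench jaw   -> move forward
    "JAW_RELEASE": "STOP",    # relax jaw    -> stop
    "BLINK_2": "LEFT",        # two blinks   -> turn left
    "BLINK_3": "RIGHT",       # three blinks -> turn right
}

wheelchair = None  # output stream of the connected wheelchair, set when it registers

class RelayHandler(socketserver.StreamRequestHandler):
    def handle(self):
        global wheelchair
        role = self.rfile.readline().strip().decode()       # first line: "MOBILE" or "WHEELCHAIR"
        if role == "WHEELCHAIR":
            wheelchair = self.wfile
            self.rfile.read()                                # keep the connection open until it closes
            wheelchair = None
            return
        for line in self.rfile:                              # subsequent lines: events from the mobile app
            command = COMMANDS.get(line.strip().decode())
            if command and wheelchair:
                wheelchair.write((command + "\n").encode())  # forward the movement command

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), RelayHandler) as server:
        server.serve_forever()
```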
5.2.3 Wheelchair: The wheelchair is represented by a two-
wheeled robotic vehicle that was assembled from scratch. It
consists of two motors fitted to the wheels, a motor driving circuit
to drive the motors in forward and reverse directions, and an ARM Cortex M3 microcontroller (LPC1768) whose purpose is to communicate with the server via TI's CC3000 Wi-Fi module and to send commands to the motor driver. Fig. 11 shows the different
modules present in the wheelchair.
The screenshot of log messages displayed by the server
application and the Android mobile application is shown in Fig. 12.
6 Results and evaluation
Using the prototype, the EMG and EEG pattern detection schemes
were tested. Based on these results, the proposed approach is
evaluated.
6.1 EMG detection analysis
The wheelchair movements driven by EMG detection were tested for accuracy with multiple users. Ten randomly selected people were asked to wear the NeuroSky MindWave headset and control the movement of the wheelchair in all directions. Before starting the test, each
person's blink pattern was recorded using the headset and stored on
the Android mobile. This pattern could then be used by the
application to correlate with the raw EEG signals during online eye
blink detection. It was observed that the amplitude threshold set for
jaw tension detection for one user did not provide accurate results
for the next user. Hence, for each person, different thresholds were
set in the code for jaw tension detection. During testing, the actual action performed by every user and the resulting movement of the wheelchair were noted in each trial. The mapping between blink and jaw tension patterns and the wheelchair movement is as follows:
Front movement: Creating tension in the jaw.
Stop moving: Releasing the tension on the jaw.
Left turn: Blinking both the eyes twice, with no more than 1 s
between blinks.
Right turn: Blinking both the eyes thrice, with no more than 1 s
between blinks.
Table 1 shows the confusion matrix obtained for the intended action performed and the observed wheelchair movement. This matrix was generated by considering the average of all the movements commanded by all ten persons. In total, there were 110 movements commanded by the 10 people. The rows specify the action performed by the user and the columns indicate the observed movement of the wheelchair. From the table, it is evident that the accuracy is >86%, since the principal diagonal elements are ≥0.86. The most accurate command was ‘Stop’, as this could happen only when there is a transition from a tensed to a released state of the jaw. The forward movement of the wheelchair is 96% accurate, where 2% is lost to the action ‘Left’ and the remaining 2%
for ‘No task’.
Fig. 11 Modules of the wheelchair
Fig. 12 Log messages displayed on the server and the Android application
This is because, for the jaw tension created by some of the users, the transition triggered one blink and two blinks, which corresponded to ‘No task’ and ‘Left’, respectively. The ‘Left’
movement is 88% accurate. A higher percentage for ‘Left’ is lost to ‘No task’ because, of the two blinks performed to command ‘Left’, sometimes only one blink was detected, which matched ‘No task’. Similarly, for the ‘Right’ movement, the three blinks were sometimes detected as two blinks by the Android application, which resulted in 86% accuracy.
The accuracy of pattern recognition by the application, including both blink and jaw tension detection, for the tasks performed by each user is represented in a bar chart in Fig. 13. The accuracy for each user is calculated as the number of trials in which the correct movement was observed for the corresponding user action, divided by the total number of trials. The maximum accuracy achieved was 96.3%.
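For clarity, the per-user accuracy and the rows of Table 1 can be computed from the logged (intended action, observed movement) pairs as in the following sketch; the label names follow Table 1, while the function names are illustrative.

```python
# Minimal sketch of how per-user accuracy and Table 1 entries can be computed from
# logged (intended, observed) pairs (assumption: labels are the five classes of Table 1).
from collections import Counter

LABELS = ["Front", "Left", "Right", "Stop", "No task"]

def accuracy(trials):
    """trials: list of (intended, observed) pairs for one user."""
    correct = sum(1 for intended, observed in trials if intended == observed)
    return correct / len(trials)

def confusion_row(trials, intended_label):
    """Fraction of each observed movement for one intended action (one row of Table 1)."""
    observed = [obs for intent, obs in trials if intent == intended_label]
    counts = Counter(observed)
    return {label: counts.get(label, 0) / max(len(observed), 1) for label in LABELS}
```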
6.2 EEG detection analysis
The EEG pattern recognition approach was tested using the same
wheelchair where users were asked to lock/unlock it. The prototype
was evaluated with the help of ten people. Each person was asked
to create a password to secure the wheelchair, by performing
mental tasks: meditation, attention, and a neutral state randomly for
a specific amount of time. The combination of these states was left
to the user. The raw EEG signals from each user were stored on a
PC when these actions were performed. The individuals were also
asked to set the duration of each subwindow >5 s. For instance, a
random task set containing four subwindows can be: meditating for
5 s, being neutral for 5 s, meditating for 8 s, and finally being
attentive for 10 s. This forms the password: meditate–neutral–
meditate–attention. The number of subwindows and subwindow
durations had to be memorised by the user so that he/she can
generate the same pattern (password) again. Though the way of
performing different tasks such as attention and meditation can be
chosen by the user, for experimental purposes, they were provided
with some specific tasks. Each user meditated in his/her own way
by closing their eyes to induce more alpha waves. To stimulate
high concentration of beta waves, they were given with
mathematical equation to solve (attention), asked them to read
some texts or to solve some puzzles. The password thus stored for
all ten users formed training data and the wheelchair was locked
using the respective password for the corresponding user, during
the test.
While testing, the individuals were asked to induce the same
EEG pattern again to unlock the wheelchair with their unique
password. Each user was asked to unlock the wheelchair ten times.
Since it was difficult to generate the EEG pattern every time in a single trial, they were provided with five trials to unlock the wheelchair. If the wheelchair was unlocked within these five trials, then it was considered a success; otherwise, it was considered a failure. Hence, every
person had to unlock the wheelchair ten times with five trials
provided for each unlocking session. The success rate (SR) and
false unlocking rate (FUR) for each unlocking trial can be
described as follows:
SR = (number of times the wheelchair was unlocked) / (total number of unlocking sessions),  (7)
FUR = (number of times a user was not able to unlock) / (total number of unlocking sessions).  (8)
The SR and FUR obtained with the ten users are shown in Figs. 14 and 15, respectively. When users were asked to perform specific mental tasks for different time intervals (subwindows), they initially had to count the time themselves to know when to change the mental state. Since all the tasks, including meditation, attention, and counting the time, are controlled by the brain, the meditation and attention tasks were affected by the time counting. This resulted in almost equal amplitude levels for both the alpha and beta frequencies, as the tasks were mixed up in the brain, generating irregular patterns both for the password and during testing. In order to eliminate the problem with time counting, multiple alarms were set on the Android mobile to beep at specific intervals of the user's choice, indicating to the user when to change his/her task.
On observing the obtained SR and FUR, we see that the maximum SR achieved was 90% and the lowest FUR was 10%. From these results, it can be concluded that when a person can perform a single task like meditation or attention without any disturbance, a higher SR can be achieved even when a single BCI electrode is used.
Fig. 16 shows the CDF plot of the number of trials each user took to unlock the wheelchair successfully. It is evident from the plot that all users were able to unlock the wheelchair within the five trials provided. The histogram of the number of trials is shown in Fig. 17, which reveals that, out of the total of 100 unlocking sessions, the system was unlocked within 1 trial 35 times, within 2 trials 37 times, and within 3 trials 19 times.
Table 1 Confusion matrix obtained via blinking and jaw tension (rows: action performed; columns: observed movement)
          Front  Left  Right  Stop  No task
Front     0.96   0.02  0      0     0.02
Left      0      0.88  0.02   0     0.10
Right     0      0.11  0.86   0     0.03
Stop      0      0     0      0.98  0.02
No task   0      0.06  0      0     0.94
Fig. 13 Accuracy of blink and jaw tension detection from each user
Fig. 14 Success rate
Fig. 15 False unlocking rate
In this experiment, the NeuroSky MindWave headset, which has a single electrode, was used to capture brain waves from the frontal lobe. After analysing the stored EEG patterns of all ten users, it was observed that the number of subwindows, the duration of each subwindow, and the total length of the password were different for each user, hence providing a secure password. This helped considerably in achieving a low FUR. Since the system is intended for security applications, the FUR should be as low as possible, ideally zero. The FUR can be decreased, and better security provided, in several ways:
i. Multiple tasks (brain waves): Along with the tasks that induce
alpha and beta waves, if other tasks which stimulate theta,
delta, and gamma waves are considered in the proposed
approach, then the password will be stronger, decreasing the
FUR almost to zero. To achieve this, different sensors have to be placed at multiple locations on the head as per the 10–20 system.
ii. Multiple electrodes: The concentration of a specific brain wave is higher in a particular brain lobe when a certain task is performed. For instance, alpha waves have a higher magnitude in the parietal lobe when the user is visually attentive. Alpha waves can be detected in the occipital lobe for the same task, but with a lower magnitude. Hence, for a specific mental task, multiple electrodes have to be placed over different parts of the brain to identify the presence of a particular brain wave accurately. Therefore, the SR can be increased and the FUR decreased if multiple electrodes are used to measure the brain waves from different parts of the brain.
Considering the combination of different subwindows, multiple mental tasks (brain waves), and multiple electrodes, it is possible to obtain a stronger, more secure password which is almost impossible to break using brute force.
Fig. 18 shows the average SR of the ten users for different subwindow durations up to 40 s. From the graph, we observe that the SR is lower for shorter subwindows such as 5 s. The reason for this is that a person cannot easily switch between tasks like meditation and attention in a short time. The value starts stabilising at around 10 s, after which the SR stays high. This 10 s is sufficient to transition from the neutral to the meditative or attentive states and vice versa. The subwindow duration depends on the type of task that a person is performing and also on his/her capability to switch to another task quickly. Further, it should be considered that, with longer subwindows, there is a possibility that the SR might decrease, as the individual may not be able to stick to the particular task he/she is performing for a longer duration.
For some of the users, the alpha waves (meditation) were
clearly distinguishable from being neutral. For the rest of the users,
the beta waves were dominant. Hence, the SR and the detection
possibility completely depend on the discipline of the individual.
7 Conclusion
In the context of IoT, this work presented several novel interconnections between humans and IoT systems. EMG data was measured with the NeuroSky MindWave Mobile headset and interpreted on an Android device. The derived signals were published over Wi-Fi to a server. This server, in turn, was made capable of interpreting EEG data for the purposes of biometric security. The server connected to a robot representing the wheelchair. Commands forwarded to this robot were executed, making it possible to control it with eye blinks and jaw tension.
The algorithm currently implemented on the Android device for the identification of a user based on EEG patterns can further be converted to C code. This way, it can be executed on a microcontroller instead, increasing the applicability of the security implementation. In the present implementation, the blink pattern for every user has to be stored beforehand so that it can be used for correlation later. Moreover, the amplitude threshold set for jaw tension detection varies for different individuals. We want to eliminate this and make the system user-independent. A higher success rate with smaller subwindows in the biometric security system is also desired. The usage of multiple electrodes is one avenue of further work to achieve this; it would also give higher overall security and accuracy.
A video, demonstrating the prototype, can be found at https://
www.dropbox.com/s/s0u88fwosa4qxse/Video.mp4?dl=0.
8 References
[1] Gurkok, H., Nijholt, A.: ‘Affective brain–computer interfaces for arts’. Proc.
Humaine Association Conf. on Affective Computing and Intelligent
Interaction, Geneva, 2013
[2] Erp, J.B.F., Brouwer, A.: ‘Touch-based brain computer interfaces: state of the
art’. Proc. IEEE Haptics Symp., Houston, TX, 2014
[3] Marshall, D., Coyle, D., Wilson, S., et al.: ‘Games, gameplay, and BCI: the
state of the art’, Proc. IEEE Trans. Comput. Intell. AI Games, 2013, 5, (2), pp.
82–99
[4] Vourvopoulos, A., Liarokapis, F.: ‘Robot navigation using brain-computer
interfaces’. Proc. IEEE 11th Int. Conf. on Trust, Security and Privacy in
Computing and Communications, Liverpool, 2012
[5] Leeb, R., Tonin, L., Rohm, M., et al.: ‘Towards independence: a BCI
telepresence robot for people with severe motor disabilities’, Proc. IEEE,
2015, 103, (6), pp. 969–982
[6] Lee, P.J., Chin, S.W.: ‘Early childhood educator assistant with brain computer
interface’. Proc. Int. Conf. on Software Intelligence Technologies and
Applications & Int. Conf. on Frontiers of Internet of Things 2014, Hsinchu,
2014
[7] Tan, H.H., Chin, S.W.: ‘Personal career planner via brain-computer interface’.
Proc. Int. Conf. on Software Intelligence Technologies and Applications &
Int. Conf. on Frontiers of Internet of Things 2014, Hsinchu, Taiwan, 2014
[8] Rihana, S., Damien, P., Moujaess, T.: ‘EEG-eye blink detection system for
brain computer interface’. Proc. Converging Clinical & Engineering.
Research on NR, BIOSYSROB, Springer, Berlin, Heidelberg, 2013
[9] NeuroSky: ‘MindWave datasheet EN’. NeuroSky Inc., Head Set Features +
Technical Specifications, 2011
Fig. 16 CDF plot of number of trials
Fig. 17 Histogram of number of trials
Fig. 18 Success rate with different subwindows
[10] Hortal, E., Ubeda, A., Ianez, E., et al.: ‘Online classification of two mental
tasks using a SVM-based BCI system’. Proc. 6th Annual Int. IEEE EMBS
Conf. on Neural Engineering, San Diego, California, 6–8 November 2013
[11] Hal, B.V., Rhodes, S., Dunne, B., et al.: ‘Low-cost EEG-based sleep
detection’. 36th Annual Int. Conf. of the IEEE Proc. Engineering in Medicine
and Biology Society (EMBC), Chicago, IL, 2014
[12] Campisi, P., Rocca, D.L.: ‘Brain waves for automatic biometric-based user
recognition’, Proc. IEEE Trans. Inf. Forensics Sec., 2014, 9, (5), pp. 782–800
[13] Huang, T.W., Tai, Y.H., Tian, Y.J., et al.: ‘The fastest BCI for writing Chinese
characters using brain waves’. Proc. Fourth Global Congress on Intelligent
Systems, Hong Kong, 2013
[14] Sarmiento, L.C., Cortes, C.J., Bacca, J.A., et al.: ‘Brain computer interface
(BCI) with EEG signals for automatic vowel recognition based on articulation
mode’. Biosignals and Biorobotics Conf. Proc. Biosignals and Robotics for
Better and Safer Living (BRC), Salvador, 2014
[15] Chai, R., Ling, S.H., Hunter, G.P., et al.: ‘Brain computer interface classifier
for wheelchair commands using neural network with fuzzy particle swarm
optimization’, Proc. IEEE J. Biomed. Health Inf., 2014, 18, (5), pp. 1614–
1624
[16] Reshmi, G., Amal, A.: ‘Design of a BCI system for piloting a wheelchair
using five class MI based EEG’. Proc. Third Int. Conf. on Advances in
Computing and Communications, Cochin, 2013
[17] Eid, M., Fernandes, A.: ‘Readgogo!: towards real-time notification on
readers’ state of attention’. Proc. XXIV Int. Conf. on Information,
Communication and Automation Technologies (ICAT), Sarajevo, 2013
[18] Zhuang, T., Zhao, H., Tang, Z.: ‘A study of brainwave entrainment based on
EEG brain dynamics’, Proc. Comput. Inf. Sci. J., 2009, 2, (2), pp. 80–86
[19] http://www.brainworksneurotherapy.com/what-are-brainwaves, 1 May 2015
172 IET Cyber-Phys. Syst., Theory Appl., 2019, Vol. 4 Iss. 2, pp. 164-172
This is an open access article published by the IET under the Creative Commons Attribution-NoDerivs License
(http://creativecommons.org/licenses/by-nd/3.0/)
... Utilizing portable systems greatly simplifies the process of recording electroencephalography (EEG) in both indoor and outdoor settings, particularly for applications involving brain-computer interfaces (BCI). For extended patient monitoring, prefrontal EEG recordings are predominantly favoured [1], and these find numerous applications [2][3][4][5][6]. Typically, recorded EEG data is susceptible to contamination from both physiological sources (such as cardiac and neural fluctuations) and non-physiological factors (like electrode pops, electrical shifts, and linear trend artifacts) [7,8]. ...
... The generation of simulated data follows the equation (5). ...
Article
Full-text available
The diagnosis of neurological disorders often involves analyzing EEG data, which can be contaminated by artifacts from eye movements or blinking (EOG). To improve the accuracy of EEG-based analysis, we propose a novel framework, VME-EFD, which combines Variational Mode Extraction (VME) and Empirical Fourier Decomposition (EFD) for effective EOG artifact removal. In this approach, the EEG signal is first decomposed by VME into two segments: the desired EEG signal and the EOG artifact. The EOG component is further processed by EFD, where decomposition levels are analyzed based on energy and skewness. The level with the highest energy and skewness, corresponding to the artifact, is discarded, while the remaining levels are reintegrated with the desired EEG. Simulations on both synthetic and real EEG datasets demonstrate that VME-EFD outperforms existing methods, with lower RRMSE (0.1358 versus 0.1557, 0.1823, 0.2079, 0.2748), lower ΔPSD in the α band (0.10 ± 0.01 and 0.17 ± 0.04 versus 0.89 ± 0.91 and 0.22 ± 0.19, 1.32 ± 0.23 and 1.10 ± 0.07, 2.86 ± 1.30 and 1.19 ± 0.07, 3.96 ± 0.56 and 2.42 ± 2.48), and higher correlation coefficient (CC: 0.9732 versus 0.9695, 0.9514, 0.8994, 0.8730). The framework effectively removes EOG artifacts and preserves critical EEG features, particularly in the α band, making it highly suitable for brain-computer interface (BCI) applications.
... For both categories, the classification has also been relatively simple and direct because the corresponding results have this theme in common. For example, for the category "authentication," Narayana et al. (2019) present an application for the physically challenged consisting of a biometric security system configured by a BCI, or Merrill (2019) designed a brain-based authentication system using custom-fit EEG earpieces. And for the "cyberattacks" category, the works by Bernal et al. (2020Bernal et al. ( , 2021Bernal et al. ( , 2022 regarding the BCI life cycle are excellent prototypical examples of this category. ...
... In this regard, three examples are shown next. Narayana et al. (2019) present an important application for the physically challenged consisting of a biometric security system configured by a BCI to lock/unlock a wheelchair and control its movements using these patterns that occur due to eye blinks and activity of muscles in the jaw. Alomari et al. (2017) propose that a practical EEG-based system could be developed to make it easier for users to select a password based on the prediction of its memorization at the time of its creation. ...
Article
Full-text available
With the recent increasing interest of researchers for Brain-Computer Interface (BCI), emerges a challenge for safety and security fields. Thus, the general objective of this research is to explore, from an engineering perspective, the trends and main research needs on the risks and applications of BCIs in safety and security fields. In addition, the specific objective is to explore the BCIs as an emerging risk. The method used consists of the sequential application of two phases. The first phase is carried out a scoping literature review. And with the second phase, the BCIs are analyzed as an emerging risk. With the first phase, thematic categories are analyzed. The categories are fatigue detection, safety control, and risk identification within the safety field. And within the security field are the categories cyberattacks and authentication. As a result, a trend is identified that considers the BCI as a source of risk and as a technology for risk prevention. Also, another trend based on the definitions and concepts of safety and security applied to BCIs is identified. Thus, "BCI safety" and "BCI security" are defined. The second phase proposes a general emerging risk framing of the BCI technology based on the qualitative results of type, level, and management strategies for emerging risk. These results define a framework for studying the safety and security of BCIs. In addition, there are two challenges. Firstly, to design techniques to assess the BCI risks. Secondly, probably more critical, to define the tolerability criteria of individual and social risk. Article link: https://authors.elsevier.com/sd/article/S0925-7535(22)00390-3
... CTRL labs developed a wristband that accurately maps finger actuation and hand positioning using electromyographic signals and accelerometer data (76). Several companies are working on developing novel EEG based BCI electrodes interpreting neural activity using artificial intelligence, allowing users to control objects with their minds (77,78). In a more recent study, a digital bridge between the brain and spinal cord was developed based on the brain-spine interface (BSI) implant to restore communication between the brain and the spinal cord, allowing individuals with chronic tetraplegia to walk naturally in community (79). ...
Article
Full-text available
In the relentless pursuit of precision medicine, the intersection of cutting-edge technology and healthcare has given rise to a transformative era. At the forefront of this revolution stands the burgeoning field of wearable and implantable biosensors, promising a paradigm shift in how we monitor, analyze, and tailor medical interventions. As these miniature marvels seamlessly integrate with the human body, they weave a tapestry of real-time health data, offering unprecedented insights into individual physiological landscapes. This log embarks on a journey into the realm of wearable and implantable biosensors, where the convergence of biology and technology heralds a new dawn in personalized healthcare. Here, we explore the intricate web of innovations, challenges, and the immense potential these bioelectronics sentinels hold in sculpting the future of precision medicine.
... The logical integration is done using an intentional double blink coupled with a software interface. Kevin Warmerdam proposed a lightweight mechanism to extract meaningful information from a single-channel NeuroSky headset rather than analyzing and processing all channels of raw data [35]. For controlling different devices and applications, the system fused EEG with electromyography (EMG) signals, recorded from muscle movements, for accurate feature extraction. ...
Preprint
Full-text available
Brain-computer interfaces (BCI) provide a mobility solution for patients with various disabilities. However, BCI systems require further research to enhance their performance while incorporating the physical and behavioral states of patients into the system. As the principal users of a BCI system, patients with disabilities are emotionally sensitive, so a BCI device that adaptively tunes to the patient's psychological state could provide the foundation for refining BCI applications. This paper focuses on the collection and analysis of human electroencephalogram (EEG) signals obtained in response to the different psychological effects of sound stimuli. Filtration and preprocessing of the dataset are achieved using the frequency-based distribution of the EEG signals. Different machine learning tools and techniques are evaluated and applied to the extracted power bands of the signals. The experimental results show that the proposed system predicts mental states with an average accuracy of 74.26%. Moreover, an automated BCI system is developed to control an electric-powered wheelchair (EPW) while responding to the user's mental state with a contingency mechanism. The results show that such a system could be designed to make BCI systems more reliable, safe, adaptable, and emotion-responsive for sensitive paralytic patients. The system also shows a satisfactory True Positive Rate (TPR) and False Positive Rate (FPR), with an average time of 8.4 seconds to generate an interpretable brain signal from the user.
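As a rough illustration of the kind of pipeline described in this abstract (frequency-band power features followed by a standard classifier), the sketch below computes per-band spectral power from raw EEG epochs and trains a scikit-learn model. The sampling rate, band edges, and the choice of a random forest are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: band-power features + classifier for mental-state prediction.
# Band edges, window length and the classifier are assumptions for illustration.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 512  # assumed raw-EEG sampling rate
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs=FS):
    """Return one log-compressed power value per EEG band for a 1-D epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs * 2))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.trapz(psd[mask], freqs[mask]))  # integrate PSD in band
    return np.log1p(feats)

def train_state_classifier(epochs, labels):
    """epochs: (n_epochs, n_samples) raw EEG; labels: mental-state labels."""
    X = np.array([band_powers(e) for e in epochs])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```

The same feature matrix could of course be fed to any other classifier; the point is only that a handful of band powers per epoch is often enough for coarse mental-state discrimination.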
... From Appendix D, the number of EEG electrodes and their locations differ across the studies, as they target different brain signals that are active in different regions of the brain. Motor imagery (MI) signals are active over the motor cortex [64], [65], located in the central region of the scalp close to the frontal area. This explains the use of those regions in most studies utilizing the MI paradigm. ...
Article
Full-text available
The use of brain signals to control wheelchairs is a promising solution for many disabled individuals, specifically those suffering from motor neuron disease affecting the proper functioning of their motor units. Almost two decades after the first work, the applicability of EEG-driven wheelchairs is still limited to laboratory environments. In this work, a systematic review has been conducted to identify the state of the art and the different models adopted in the literature. Furthermore, strong emphasis is placed on the challenges impeding broad use of the technology, as well as the latest research trends in each of those areas.
Article
Human-computer interaction (HCI) is the bridge between systems and people. In recent times, the advancement and implementation of HCI technology have pushed sensors toward greater flexibility, lower weight, and smaller size. Graphene, as a rapidly progressing flexible material, exhibits outstanding mechanical, electrical, thermal, and other properties. When employed in strain, physiological, and various other sensor types, it delivers high performance. Meanwhile, graphene sensors offer flexibility, biocompatibility, and cost-effectiveness, thereby addressing prevalent limitations in current HCI sensing systems. This paper delves into the requirements of HCI sensing, with a primary focus on cutting-edge research developments of graphene sensors in areas such as flexible wearable devices and real-time health monitoring. Additionally, it summarizes the significant contributions of graphene sensors to the HCI field. Furthermore, it explores the challenges that graphene sensors still encounter within HCI and proposes future development trends for these sensors.
Article
Brain–computer interfaces (BCIs) establish a direct communication channel between the human brain and external devices. Among various methods, electroencephalography (EEG) stands out as the most popular choice for BCI design due to its non-invasiveness, ease of use, and cost-effectiveness. This paper presents and compares the accuracy and robustness of an EEG system employing one or two channels. We present both hardware and algorithms for the detection of open and closed eyes. Firstly, we utilize a low-cost hardware device to capture EEG activity from one or two channels. Next, we apply the discrete Fourier transform to analyze the signals in the frequency domain, extracting features from each channel. For classification, we test various well-known techniques, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Decision Tree (DT), and Logistic Regression (LR). To evaluate the system, we conduct experiments acquiring signals associated with open and closed eyes and compare the performance of one versus two channels. The results demonstrate that a two-channel system with an SVM, DT, or LR classifier is more robust than a single-channel setup and achieves an accuracy greater than 95% for both eye states.
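To make the described pipeline concrete, here is a minimal sketch that extracts magnitude-spectrum features per channel with the FFT and compares SVM, decision-tree, and logistic-regression classifiers via cross-validation. The sampling rate, the 40 Hz feature cutoff, and the classifier hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the described open/closed-eye pipeline: DFT magnitude features per
# channel, then a comparison of several standard classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

FS = 256  # assumed sampling rate

def fft_features(epoch, fmax=40):
    """Per-channel FFT magnitudes up to fmax Hz, concatenated into one vector."""
    epoch = np.atleast_2d(epoch)                       # (n_channels, n_samples)
    spec = np.abs(np.fft.rfft(epoch, axis=1))
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / FS)
    return spec[:, freqs <= fmax].ravel()

def compare_classifiers(epochs, labels):
    """epochs: iterable of equally sized (n_channels, n_samples) arrays."""
    X = np.array([fft_features(e) for e in epochs])
    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("DT", DecisionTreeClassifier(max_depth=5)),
                      ("LR", LogisticRegression(max_iter=1000))]:
        scores = cross_val_score(clf, X, labels, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Adding a second channel simply doubles the length of each feature vector, which is the only change needed to reproduce the one- versus two-channel comparison described above.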
Article
Objective: The latest development in low-cost single-channel electroencephalography (EEG) devices is gaining widespread attention because it reduces hardware complexity. The discrete wavelet transform (DWT) has been a popular solution for eliminating blink artifacts in EEG signals. However, existing DWT-based methods share the same wavelet function among subjects, which ignores individual differences. To remedy this deficiency, this paper proposes a novel approach to eliminate blink artifacts in single-channel EEG signals. Methods: Firstly, a forward-backward low-pass filter (FBLPF) and a fixed-length window are used to detect blink artifact intervals. Secondly, an adaptive bi-orthogonal wavelet (ABOW) is constructed based on the most representative blink signal. Thirdly, the detected signals are filtered by ABOW-DWT, with the DWT decomposition depth chosen automatically by a similarity-based method. Results: Compared to eight state-of-the-art methods, experiments on semi-simulated and real EEG signals demonstrate the proposed method's superiority in removing blink artifacts with less loss of neural information. Significance: To filter blink artifacts in single-channel EEG signals, the idea of constructing an adaptive wavelet function based on the signal characteristics, rather than using a conventional wavelet, is proposed for the first time.
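The two-stage idea (a zero-phase low-pass filter to localize blink intervals, then wavelet-domain suppression inside those intervals) can be sketched as follows. A fixed biorthogonal wavelet ('bior3.3') stands in for the paper's subject-adaptive wavelet, and the cutoff, window length, and threshold are placeholder assumptions.

```python
# Simplified two-stage blink removal sketch (not the paper's adaptive wavelet).
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def detect_blink_intervals(eeg, fs, cutoff=8.0, win=0.4, thresh=3.0):
    """Return (start, stop) sample indices of windows with large low-frequency swings."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    low = filtfilt(b, a, eeg)                  # forward-backward => zero phase
    z = (low - low.mean()) / (low.std() + 1e-12)
    step = int(win * fs)
    return [(i, i + step) for i in range(0, len(z) - step, step)
            if np.abs(z[i:i + step]).max() > thresh]

def remove_blinks(eeg, intervals, wavelet="bior3.3", level=5):
    """Suppress the slow, blink-dominated component inside each detected interval."""
    clean = eeg.copy()
    for start, stop in intervals:
        seg = clean[start:stop]
        depth = min(level, pywt.dwt_max_level(len(seg), wavelet))
        coeffs = pywt.wavedec(seg, wavelet, level=depth)
        coeffs[0] = np.zeros_like(coeffs[0])   # zero the deepest approximation band
        clean[start:stop] = pywt.waverec(coeffs, wavelet)[:len(seg)]
    return clean
```

The adaptive step in the paper replaces the fixed 'bior3.3' wavelet with one constructed from each subject's most representative blink; the surrounding detect-then-filter structure stays the same.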
Article
Full-text available
This paper presents an important step forward towards increasing the independence of people with severe motor disabilities, by using brain–computer interfaces to harness the power of the Internet of Things. We analyze the stability of brain signals as end-users with motor disabilities progress from performing simple standard on-screen training tasks to interacting with real devices in the real world. Furthermore, we demonstrate how the concept of shared control—which interprets the user's commands in context—empowers users to perform rather complex tasks without a high workload. We present the results of nine end-users with motor disabilities who were able to complete navigation tasks with a telepresence robot successfully in a remote environment (in some cases in a different country) that they had never previously visited. Moreover, these end-users achieved similar levels of performance to a control group of 10 healthy users who were already familiar with the environment.
Article
Full-text available
A real-time stage 1 sleep detection system using a low-cost single dry-sensor EEG headset is described. The device issues an auditory warning at the onset of stage 1 sleep using the "NeuroSky Mindset," an inexpensive commercial entertainment-based headset. The EEG signal is filtered into low/high alpha and low/high beta frequency bands, which are analyzed to indicate the onset of sleep. Preliminary results indicate an 81% effective sleep-detection rate, with all failures being false positives of sleep onset. The device was able to predict and respond to the onset of drowsiness preceding stage 1 sleep, allowing earlier warnings and thus fewer sleep-related accidents.
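One simple way to picture this kind of band-ratio rule is sketched below: band-pass the signal into the four sub-bands and flag drowsiness when the alpha-to-beta power ratio stays high for several consecutive windows. The band edges, ratio threshold, and persistence count are assumptions for illustration, not the system's calibrated values.

```python
# Illustrative drowsiness rule based on alpha/beta band powers (assumed thresholds).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512  # assumed raw-EEG sampling rate
BANDS = {"low_alpha": (8, 10), "high_alpha": (10, 12),
         "low_beta": (12, 18), "high_beta": (18, 30)}

def band_power(x, lo, hi, fs=FS):
    """Mean power of x after band-pass filtering to [lo, hi] Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return float(np.mean(filtfilt(b, a, x) ** 2))

def drowsiness_alert(windows, ratio_thresh=2.0, persistence=3):
    """windows: iterable of 1-D EEG chunks (e.g. 1-second segments)."""
    hits = 0
    for w in windows:
        alpha = band_power(w, *BANDS["low_alpha"]) + band_power(w, *BANDS["high_alpha"])
        beta = band_power(w, *BANDS["low_beta"]) + band_power(w, *BANDS["high_beta"])
        hits = hits + 1 if alpha / (beta + 1e-12) > ratio_thresh else 0
        if hits >= persistence:
            return True   # this is where the auditory warning would be issued
    return False
```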
Conference Paper
Full-text available
One of the most promising methods to assist amputee or paralyzed patients in the control of prosthetic devices is the use of a brain-computer interface (BCI). A BCI allows communication between the brain and the prosthetic device through signal-processing protocols. However, due to the noisy nature of brain signals, available signal-processing protocols are unable to correctly interpret brain commands and cannot be used beyond the laboratory setting. To address this challenge, in this work we present a novel automatic brain-signal recognition protocol based on vowel articulation mode. The approach identifies the mental state of imagining open-mid and closed vowels, without imagining movement of the oral cavity, for application in prosthetic device control. The method consists of using brain signals from the language area (21 electrodes) while the subject performs the specific task of thinking of the respective vowel. In the processing stage, the power spectral density (PSD) was calculated for each of the brain signals, and classification was carried out with a Support Vector Machine (SVM). Recognition precision of between 84% and 94% was achieved for the vowels according to their articulation mode. The proposed method is promising for use by amputee or paraplegic patients.
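A compact sketch of the PSD-plus-SVM step might look like the following; the sampling rate, frequency range, and SVM parameters are placeholders rather than the authors' exact configuration.

```python
# Sketch: Welch PSD features from multichannel trials + SVM classifier.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256  # assumed sampling rate

def psd_features(trial, fmax=45):
    """trial: (n_channels, n_samples) EEG for one imagined vowel."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS, axis=1)
    return np.log(psd[:, freqs <= fmax]).ravel()   # log-PSD up to fmax Hz

def train_vowel_classifier(trials, labels):
    """trials: list of equally sized (n_channels, n_samples) arrays."""
    X = np.array([psd_features(t) for t in trials])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf
```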
Article
Full-text available
This paper presents the classification of a three-class mental-task-based brain-computer interface (BCI) that uses the Hilbert-Huang transform as the feature extractor and a fuzzy particle swarm optimization with cross-mutated operation-based artificial neural network (FPSOCM-ANN) as the classifier. The experiments were conducted on five able-bodied subjects and five patients with tetraplegia using electroencephalography signals from six channels, and different time-windows of data were examined to find the highest accuracy. For practical purposes, the best two-channel combinations were chosen and presented. The three relevant mental tasks used for the BCI were letter composing, arithmetic, and rolling a Rubik's cube forward, associated with three wheelchair commands: left, right, and forward, respectively. An additional eyes-closed task was collected for testing and used for on-off commands. The results show a dominant alpha wave during eye closure, with average classification accuracy above 90%. The accuracies for patients with tetraplegia were lower than for the able-bodied subjects; however, this was improved by increasing the duration of the time-windows. The FPSOCM-ANN provides improved accuracies compared to a genetic-algorithm-based artificial neural network (GA-ANN) for three-mental-task BCI classification, with the best accuracy achieved for a 7-s time-window: 84.4% (FPSOCM-ANN) compared to 77.4% (GA-ANN). Further comparisons of feature extractors and classifiers are included. For two-channel classification, the best two channels were O1 and C4, followed by P3 and O2, and then C3 and O2. Mental arithmetic was the most correctly classified task, followed by mentally rolling the Rubik's cube forward and mental letter composing.
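For orientation, the feature-extraction idea (Hilbert-Huang style: empirical mode decomposition followed by instantaneous-frequency and energy features) can be approximated as below. The sketch assumes the third-party PyEMD package for the decomposition, and a plain scikit-learn MLP stands in for the paper's FPSOCM-trained network; sampling rate, IMF count, and network size are illustrative assumptions.

```python
# Approximate Hilbert-Huang feature extraction + generic neural-network classifier.
import numpy as np
from PyEMD import EMD            # assumed third-party package (pip: EMD-signal)
from scipy.signal import hilbert
from sklearn.neural_network import MLPClassifier

FS = 256  # assumed sampling rate

def hht_features(channel, n_imfs=4):
    """Energy and mean instantaneous frequency of the first few IMFs of one channel."""
    imfs = EMD()(np.asarray(channel, float))[:n_imfs]
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * FS / (2 * np.pi)
        feats += [np.log1p(np.sum(imf ** 2)), float(np.mean(inst_freq))]
    while len(feats) < 2 * n_imfs:   # pad if the signal yields fewer IMFs
        feats.append(0.0)
    return np.array(feats)

def train_task_classifier(trials, labels):
    """trials: list of (n_channels, n_samples) arrays for the three mental tasks."""
    X = np.array([np.concatenate([hht_features(ch) for ch in t]) for t in trials])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf
```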
Chapter
Brain-Computer Interface (BCI) research has focused on the development of communication tools for patients with motor disabilities. Electroencephalography (EEG) is commonly used to acquire brain electrical activity and infer mental states. Toward a brain-computer interface application, the aim of this paper is to detect eye-blink signals in EEG recordings. It describes signal acquisition using the BioRadio portable device, the methods used to pre-process the signals, and the classification of eye-blink signals using a Probabilistic Neural Network as a binary classifier. The results obtained are promising for inclusion in a neurorehabilitation application.
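For readers unfamiliar with the classifier, a minimal Parzen-window Probabilistic Neural Network can be written from its textbook definition as below; the blink features and the smoothing parameter sigma are illustrative assumptions, not the chapter's implementation.

```python
# Minimal Probabilistic Neural Network (Parzen-window) sketch for blink/no-blink.
import numpy as np

class SimplePNN:
    """One Gaussian kernel per training example; class score = mean kernel activation."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        out = []
        for x in np.asarray(X, float):
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))          # kernel activations
            scores = [k[self.y == c].mean() for c in self.classes]
            out.append(self.classes[int(np.argmax(scores))])
        return np.array(out)

def blink_features(epoch):
    """Crude per-epoch features: peak amplitude, variance, peak-to-peak range."""
    return [np.max(np.abs(epoch)), np.var(epoch), np.ptp(epoch)]
```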
Conference Paper
Brain Computer Interfaces (BCIs) rely on the user's brain activity to control equipment or computer devices. Many BCIs are based on imagined movement (called active BCIs) or on the fact that brain patterns differ in reaction to relevant or attended stimuli compared with irrelevant or unattended stimuli (called reactive BCIs). Traditionally, BCIs employ visual stimuli for feedback in active BCIs or as cues in reactive BCIs. However, these vision-based BCIs are not suited to people with an impaired visual system or to situations with a threat of visual overload. Touch-based BCIs may be a viable alternative, but they have hardly been explored so far. This paper presents the state of the art in touch-based BCIs. The feasibility of tactile BCIs based on event-related brain potentials evoked by localized vibrations has been shown, and tactile BCIs based on steady-state brain responses to different vibration frequencies can compete with their gaze-free visual counterparts. We recommend the development of specific hardware, paradigms, and classification algorithms to improve performance further.
Conference Paper
This paper presents a BCI (brain-computer interface) system, called the "fast Phonics-to-Chinese-Character system," for individuals to write Chinese characters using their brain waves. The research proposes a novel timing-coding method to collocate the order of stimulus presentation. Previously, we had developed a Phonics-to-Chinese-Character system that used both the P300 and N200 components of ERPs (event-related potentials) to detect the targets of a user's intention. Although the previously proposed system achieved good performance, we sought to further reduce the time spent selecting an item. Based on experience from previous studies, we propose four rules to collocate the timing of stimulus presentation for the brain-computer interface, and then introduce a faster system. The results of the clinical experiments show that the proposed method effectively elicits brain-wave potentials for determining the user's intended target. The accuracy of the system is above 95%. The proposed system is also the fastest BCI known today for writing Chinese characters via brain waves, and it was verified as having high academic value and practicality.
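As a generic illustration of ERP-based target selection (not the authors' timing-coding scheme), the sketch below averages the post-stimulus epochs recorded for each candidate item and picks the item whose mean amplitude in an assumed P300 window (roughly 250-450 ms) is largest. The sampling rate and window bounds are assumptions.

```python
# Generic P300-style target selection: average epochs per item, score the P300 window.
import numpy as np

FS = 256  # assumed sampling rate

def pick_target(epochs_by_item, window=(0.25, 0.45)):
    """epochs_by_item: dict item -> (n_repetitions, n_samples) post-stimulus EEG."""
    lo, hi = int(window[0] * FS), int(window[1] * FS)
    scores = {item: float(np.mean(ep, axis=0)[lo:hi].mean())
              for item, ep in epochs_by_item.items()}
    return max(scores, key=scores.get)   # item with the strongest average response
```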
Article
Brain signals have been investigated within the medical field for more than a century to study brain diseases such as epilepsy, spinal cord injuries, Alzheimer's, Parkinson's, schizophrenia, and stroke, among others. They are also used in both brain-computer and brain-machine interface systems with assistive, rehabilitative, and entertainment applications. Despite the broad interest in clinical applications, the use of brain signals as a biometric characteristic for automatic people-recognition systems has been investigated by the scientific community only recently. However, brain signals present some peculiarities not shared by the most commonly used biometrics, such as face, iris, and fingerprints, with reference to privacy compliance, robustness against spoofing attacks, the possibility of continuous identification, intrinsic liveness detection, and universality. These peculiarities make the use of brain signals appealing. On the other hand, there are many challenges which need to be properly addressed. Understanding the level of uniqueness and permanence of brain responses, designing elicitation protocols, and the invasiveness of the acquisition process are only a few of the challenges that need to be tackled. In this paper, we further discuss those issues, which represent an obstacle to the deployment of biometric systems based on the analysis of brain activity in real-life applications, and we provide a critical and comprehensive review of state-of-the-art methods for electroencephalogram-based automatic user recognition, also reporting neurophysiological evidence related to the claims made.