
EEGlass: An EEG-Eyeware Prototype for Ubiquitous Brain-Computer Interaction


Athanasios Vourvopoulos
Department of Bioengineering
Institute for Systems and Robotics
Instituto Superior Técnico
Universidade de Lisboa
Lisboa, Portugal
Evangelos Niforatos
Department of Computer Science
Norwegian University of Science and
Technology (NTNU)
Trondheim, Norway
Michail Giannakos
Department of Computer Science
Norwegian University of Science and
Technology (NTNU)
Trondheim, Norway
ABSTRACT
Contemporary Head-Mounted Displays (HMDs) are progressively becoming socially acceptable by approaching the size and design of normal eyewear. Apart from the exciting interaction design prospects, HMDs bear significant potential in hosting an array of physiological sensors very adjacent to the human skull. As a proof of concept, we illustrate EEGlass, an early wearable prototype comprised of plastic eyewear frames for approximating the form factor of a modern HMD. EEGlass is equipped with an OpenBCI board and a set of EEG electrodes at the contact points with the skull for unobtrusively collecting data related to the activity of the human brain. We tested our prototype with 1 participant performing cognitive and sensorimotor tasks while wearing an established Electroencephalography (EEG) device for obtaining a baseline. Our preliminary results showcase that EEGlass is capable of accurately capturing resting state, and of detecting motor-action and Electrooculographic (EOG) artifacts. Further experimentation is required, but our early trials with EEGlass are promising in that HMDs could serve as a springboard for moving EEG outside of the lab and into our everyday life, facilitating the design of neuroadaptive systems.
KEYWORDS
Head-Mounted Displays, Electroencephalography, Brain-Computer Interfaces, Neuroadaptive Systems
ACM Reference Format:
Athanasios Vourvopoulos, Evangelos Niforatos, and Michail Giannakos. 2019. EEGlass: An EEG-Eyeware Prototype for Ubiquitous Brain-Computer Interaction. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2019 International Symposium on Wearable Computers (UbiComp/ISWC '19 Adjunct), September 9–13, 2019, London, United Kingdom. ACM, New York, NY, USA, 6 pages.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
UbiComp/ISWC '19 Adjunct, September 9–13, 2019, London, United Kingdom
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6869-8/19/09...$15.00
In the year 2022, over 80 million Head-Mounted Display (HMD) units are expected to ship, up from over 34 million units estimated to be sold in 2019. This predicted HMD proliferation is propelled by recent advances in optics and hardware miniaturization, rendering eyewear the next frontier for Wearable and Ubiquitous Computing (e.g., Google Glass Enterprise Edition, Microsoft HoloLens 2, MagicLeap One, Focals by North, and Vuzix Blade). Undoubtedly, an HMD uptake will actualize the vision of Augmented Reality (AR), disrupting the way we consume information and execute our daily chores [ ]. From revisiting the groceries list to navigating to our destination, a plethora of daily tasks will be reshaped by AR and the unique form factor of HMDs. Beyond the immense opportunities in interaction design, the HMD form factor promises an unprecedented contact potential with the human skull: the shell of our brain, where the higher cognitive and perceptual processes reside.
Evidently, this "skull-contact" potential that eyewear bears could not go unnoticed. Various commercial eyewear products claim to utilize EEG (or EOG, electrooculography) and the contact points with the skull to provide so-called "neurofeedback". For example, Narbis sunglasses utilize three electrodes, two at the back of the temples of the device touching the left and right mastoids, and one at the tip of a protruding arm that touches the top of the skull, for tracking concentration. When concentration is low, the Narbis electrochromatic lenses start darkening to invite the user to focus more. Lowdown Focus by Smith employs a more socially acceptable design, equipping a typical pair of sunglasses with silicon electrodes at the edges of the temples that touch the left and right mastoids. A companion mobile application connects to the Lowdown Focus sunglasses for collecting the readings so
that one can track and increase one's concentration levels.

Figure 1: The EEGlass prototype worn by a user.

JINS MEME is perhaps the most prominent eyewear that employs near-skull contact for providing neurofeedback. JINS MEME utilizes
two electrodes embedded in the bridge of the eyewear that touch the nasal bone to detect concentration levels by measuring the duration and the number of eye blinks via EOG [ ]. More recently, a US patent was published describing an eyewear frame that features a flexible protrusion for holding the EEG electrodes in contact with the skull in different positions [ ]. Beyond the strong commercialization interest, the combination of eyewear with EEG yields interesting niche applications in the domains of health research and Human-Computer Interaction (HCI). e-Glass is an EEG-enabled eyewear that employs an OpenBCI board and a set of electrodes across the inner side of the frame for detecting epileptic seizures [ ]. The PhysioHMD prototype adopts a bulky "mask" design that encases a portion of the face for hosting a wide range of physiological sensors, including EEG electrodes, capturing also facial expressions [ ]. PhysioHMD is intended as a platform that informs the design of AR and Virtual Reality (VR) experiences.
On one hand, commercial approaches that combine modern eyewear with EEG (and EOG) are in general socially acceptable, but also rather limited in providing high-level information about brain activity (e.g., daily concentration levels), typically via a dedicated mobile application. Moreover, commercial "neurofeedback" products are considered "black-box" systems that utilize proprietary hardware and software. On the other hand, experimental EEG-eyewear prototypes produce fine-grained information about brain activity, while utilizing open hardware and software solutions. However, such prototypes are by default too dorky to wear outside a research lab, and cumbersome to use for extended periods of time. The EEGlass prototype attempts to fuse the social acceptability and increased "wearability" of commercial EEG-eyewear with the high information granularity and openness of experimental EEG-eyewear (see Figure 1). The outcomes of such a successful fusion remain tentative but can potentially nurture existing and envisioned cognitive systems [ ], democratize EEG, and eventually pave the way for touch-less input.

Figure 2: The 10-10 system of electrode placement topology for the EEGlass and Enobio 8.
The human brain is an electrochemical system that generates a combination of dynamic biosignals or action potentials. EEG is the most common brain signal acquisition technique, established almost a century ago [ ]. Non-invasive EEG utilizes scalp-contact electrodes for capturing the combined electrical activity of populations of excitable cells known as neurons. When neurons activate, they produce electromagnetic fields of discrete potential patterns, distinguished by different wave oscillations in the frequency domain. These patterns are ascribed to different states of mental activity and are identified by the wave oscillations they cause in the frequency domain, known as EEG bands or rhythms [ ]. EEG rhythms are divided into different frequency ranges including Delta (1–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz) and Gamma (25–90 Hz) [ ], and each rhythm or combination of rhythmic activity is linked to different mental states. For example, rhythms in the Alpha and Beta frequency bands are functionally related to major sensorimotor systems, which activate primarily through motor preparation or execution [ ]. Alpha and Theta oscillations are known to reflect cognitive and memory performance [ ], and Theta was shown by early EEG studies to be closely connected to problem-solving, perceptual processing and learning [ ]. The Delta rhythm is related to concentration, attention and internal processing [ ], whereas Gamma has been linked to consciousness and sense of self, and can be volitionally modulated during meditation [ ].
Figure 3: The EEGlass electrode contact points and the respective OpenBCI channels.
Interpreting cognitive states or motor intentions from different EEG rhythms is a complex process, and it is impossible to associate a single frequency range, or cortical location, with a brain function.
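For quick reference, the rhythm boundaries quoted above can be encoded as a small lookup; a minimal Python sketch using the text's band edges (note that the Gamma range overlaps Beta, exactly as quoted):

```python
# EEG rhythm boundaries (Hz) as quoted in the text; band edges vary across
# the literature, so treat these as one common convention, not ground truth.
EEG_BANDS = {
    "Delta": (1, 4),
    "Theta": (4, 8),
    "Alpha": (8, 13),
    "Beta": (13, 30),
    "Gamma": (25, 90),  # overlaps Beta, as given in the text
}

def rhythms_at(freq_hz):
    """Return all rhythm names whose range contains freq_hz (bands may overlap)."""
    return [name for name, (lo, hi) in EEG_BANDS.items() if lo <= freq_hz < hi]
```

As the last paragraph stresses, such a table is only a naming convention: a single frequency cannot be mapped one-to-one onto a mental state.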
Nowadays, measuring oscillatory brain activity with EEG is utilized for linking the human brain with computers via Brain-Computer Interfaces (BCIs). BCIs have been successfully utilized in the medical domain, where they enable amputees to gain control over prosthetic limbs [ ]. Other BCI application areas include monitoring users' cognitive states (e.g., attention levels and workload), gaming and rehabilitation. Thus, BCIs and wearable technologies such as HMDs offer a unique opportunity to measure user needs over time and in real-life settings, informing how critical software aspects (e.g., the interface) should respond or adapt. The potential of EEGlass for surreptitiously monitoring a user's physiological and cognitive states in real time renders it an ideal BCI for facilitating interaction with neuroadaptive systems.
The EEGlass prototype is comprised of plastic eyewear frames that can be fitted with a Google Glass HMD. We opted for this type of eyewear for: (1) low cost, (2) availability, (3) good fitting and (4) modern HMD resemblance. In fact, the selected eyewear frames follow the trend in the field of HMDs: hardware miniaturization and social acceptability [ ]. The EEG system that we embedded in the frames is the Cyton Biosensing Board by OpenBCI (OpenBCI, NY, USA). OpenBCI is a popular and affordable open hardware and software platform for the collection and analysis of biosignals such as EEG, EMG (Electromyography), ECG (Electrocardiography) and others, inspired by the grassroots DIY ("Do It Yourself") movement [ ]. The Cyton board encompasses 8 biopotential input channels (for hosting up to 8 electrodes), a 3-axis accelerometer, local storage, and wireless communication modules, while being fully programmable and Arduino compatible. Evidently, the EEGlass electrode topology is restricted by the eyewear form factor to the contact points with the skull. Thus, EEGlass utilizes 3 electrodes (plus 2 for reference and ground) based on the 10-10 system (see Figure 2) for measuring brain activity: 1 electrode placed inwards at the top of the eyewear bridge, touching the skull at the glabella, and 2 more electrodes at the inner side of the eyewear temples, touching the left and right mastoids behind the left and right ears, respectively (see Figure 3). The reference and ground electrodes are placed at the inner part of the eyewear bridge, touching the left and right sides of the nasal bone, respectively (see Figure 3).

Figure 4: A user performing a motor task while wearing both EEGlass and Enobio 8 EEG systems.
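The electrode layout just described can be summarized as a small configuration mapping. The OpenBCI channel names below are illustrative placeholders (the actual channel wiring is shown in Figure 3); the mastoid sites correspond to TP9/TP10 in the 10-10 system, as used later in the analysis:

```python
# EEGlass electrode layout as described in the text. Channel keys are
# hypothetical placeholders, not the paper's actual Cyton channel numbers.
EEGLASS_MONTAGE = {
    "ch1": {"site": "glabella (top of eyewear bridge)", "role": "EEG"},
    "ch2": {"site": "left mastoid (TP9)",               "role": "EEG"},
    "ch3": {"site": "right mastoid (TP10)",             "role": "EEG"},
    "ref": {"site": "nasal bone, left side",            "role": "reference"},
    "gnd": {"site": "nasal bone, right side",           "role": "ground"},
}
```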
In this work, we explore if the low spatial resolution of a "skull-peripheral" electrode topology, imposed by the form factor of a modern HMD, can approximate the accuracy of a typical electrode topology utilized in EEG studies. To this end, we used the Enobio 8 (Neuroelectrics, Barcelona, Spain) EEG system for forming a baseline. Enobio 8 is a wireless, 8-channel EEG system with a 3-axis accelerometer for the recording and visualization of 24-bit EEG data at 500 Hz. The spatial distribution of electrodes for our trials followed the 10-10 system configuration, with electrodes placed over the frontal area (Fpz), central (C3, C4), and parietal (Pz) (see Figure 2). The electrodes were referenced and grounded to the right ear lobe, and the electrode impedance was kept at acceptable levels. Both the Enobio 8 and the OpenBCI (embedded in the EEGlass) EEG systems were connected via Bluetooth to a dedicated desktop computer for raw signal acquisition and processing.
The EEG acquisition session began with a 4-minute period for acquiring resting state data, followed by the motor-action session. The resting state data was acquired during alternating 1-minute periods with eyes open and eyes closed. The subject was instructed to remain silent while either fixating his eye-gaze on a white cross displayed on a computer screen, or keeping his eyes closed. In the motor-action session, we employed the Graz-BCI paradigm [ ] to display a random sequence of directional left and right arrows on a computer screen (see Figure 4). When an arrow appeared, the user responded to the stimulus by performing a motor-action with the corresponding hand. The motor-action session was configured to acquire data in 24 blocks (epochs) per class (left- and right-hand arrow).
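The block structure above (fixed-length epochs cut around each arrow stimulus) amounts to slicing the continuous recording around stimulus markers. A minimal NumPy sketch; the 500 Hz rate matches the recording rate stated in the text, while the epoch window is an illustrative choice, not the paper's:

```python
import numpy as np

def epoch(signal, onsets, fs=500, tmin=-0.5, tmax=2.0):
    """Cut fixed-length windows around stimulus onsets (in seconds) from a
    1-D continuous signal sampled at fs Hz; epochs that overrun the
    recording are dropped."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    out = []
    for onset in onsets:
        center = int(onset * fs)
        if center + lo >= 0 and center + hi <= len(signal):
            out.append(signal[center + lo : center + hi])
    return np.array(out)
```

With 24 stimuli per class, this yields two (24, n_samples) arrays, one per hand, ready for per-epoch band-power and ERD analysis.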
(a) EEGlass resting state. (b) Enobio 8 resting state.
Figure 5: Resting states for EEGlass and Enobio 8. Both EEG systems detect a clear Alpha rhythm peak at 8–12 Hz.
For both systems, we used the OpenVibe acquisition servers for simultaneous EEG signal acquisition [ ]. Next, we used the OpenVibe designer for obtaining the raw EEG streams from both servers and synchronizing them with the stimuli, before storing them in a .gdf file. We processed the acquired EEG signals in MATLAB (MathWorks, MA, USA) with the EEGLAB toolbox [ ]: after importing the data and the channel information, we applied a high-pass filter at 1 Hz to remove the "baseline drift", followed by line-noise and harmonics removal at 50 Hz. Then, we used Welch's method [ ] to estimate the Power Spectral Density (PSD) and compute the average spectral power across the following frequency bands during resting state: Delta (1–4 Hz), Theta (4–7 Hz), Alpha (8–12 Hz), and Beta (12–30 Hz). The event-related synchronization/desynchronization (ERS/ERD) was extracted following the standard ERS/ERD method [ ] across the Alpha band power (8–12 Hz) and the Beta band power (12–30 Hz) over the C3 and C4 electrode locations for the Enobio 8 system, and TP9, TP10 for the OpenBCI, respectively. We calculated the ERD by using the following formula:

ERD (%) = ((Power_MotorActivity − Power_Baseline) / Power_Baseline) × 100    (1)
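The paper's pipeline ran in MATLAB/EEGLAB; an equivalent sketch in Python with SciPy is shown below. The filter orders and notch quality factor are assumptions (the text does not state them), and only the single-channel core of the pipeline is reproduced:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

FS = 500  # Hz, sampling rate of both EEG systems in the study

def preprocess(x):
    """1 Hz high-pass (removes baseline drift) followed by a 50 Hz notch
    (line noise), mirroring the steps described in the text."""
    b, a = butter(4, 1.0, btype="highpass", fs=FS)  # order 4 is an assumption
    x = filtfilt(b, a, x)
    b, a = iirnotch(50.0, Q=30.0, fs=FS)            # Q=30 is an assumption
    return filtfilt(b, a, x)

def band_power(x, lo, hi):
    """Average Welch PSD inside the [lo, hi) Hz band."""
    f, pxx = welch(x, fs=FS, nperseg=FS * 2)
    return pxx[(f >= lo) & (f < hi)].mean()

def erd(power_motor, power_baseline):
    """Equation (1): percentage power change relative to baseline."""
    return (power_motor - power_baseline) / power_baseline * 100
```

For example, a 10 Hz test sine fed through `preprocess` yields a `band_power` in the Alpha band (8–12 Hz) that dwarfs its Beta band power, which is exactly the resting-state signature reported in Figure 5.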
Early results from comparing EEG signals acquired via EEGlass with Enobio 8 indicate that EEGlass captures very closely the band power of Enobio 8, but also the decrease of oscillatory activity (ERD) during the motor-action, as we anticipated. Figure 5 shows that despite the fundamentally different electrode topology of EEGlass, both EEGlass and Enobio 8 detect a clear Alpha peak (8–12 Hz) in the electrical activity of the brain during resting state. Table 1 summarizes the recorded EEG rhythms during resting state for both EEGlass and Enobio 8 EEG systems. For investigating if brain activity linked to motor-action differed substantially between the two EEG systems, we compared the average ERD between lateral electrodes for both systems: TP9|C3 for right-hand, and TP10|C4 for left-hand movement. Dependent samples t-tests displayed significant differences in the average ERD between both pairs of lateral electrodes (p < .001 for both pairs), as shown in Figure 6 and summarized in Table 2. This indicates that the captured brain activity related to upper limb motor-action differed significantly between the EEGlass and Enobio 8 EEG systems. Moreover, Figure 7 showcases that EEGlass was able to detect basic EOG activity related to eye movement in 4 primary directions: up, down, left and right. Although the current electrode setup can capture eye-movement with only 1 degree-of-freedom (DoF), it can also detect saccadic eye movement and eye blinks.

Table 1: Resting state and EEG rhythms in µV²/Hz recorded by EEGlass and Enobio 8 EEG systems.

System    Delta   Theta   Alpha    Beta
EEGlass   21.279  4.775   12.992   0.431
Enobio 8  4.984   2.196   13.397   0.2993
Table 2: Average desynchronization (% ERD) between 8–24 Hz per electrode and hand for both EEGlass and Enobio 8 EEG systems.

               EEGlass              Enobio 8
Electrode   TP9       TP10      C3        C4
Movement    Right     Left      Right     Left
Mean        -5.058    -3.065    -20.272   -18.899
SD          11.6383   12.273    19.048    22.47
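The dependent-samples comparison reported above can be reproduced with `scipy.stats.ttest_rel` on per-epoch ERD values. The numbers below are synthetic draws loosely matching Table 2's means and SDs for the right-hand pair, not the study's actual data:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_epochs = 24  # epochs per class, as in the protocol

# Synthetic per-epoch ERD values (%) standing in for the recorded data:
# the mastoid electrode (EEGlass) shows weaker desynchronization than C3.
erd_tp9 = rng.normal(-5.0, 11.6, n_epochs)   # EEGlass TP9, right-hand trials
erd_c3 = rng.normal(-20.3, 19.0, n_epochs)   # Enobio 8 C3, right-hand trials

# Paired (dependent samples) t-test across the 24 matched epochs.
t_stat, p_val = ttest_rel(erd_tp9, erd_c3)
print(f"t({n_epochs - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```

With the paper's real data this comparison was significant at p < .001 for both electrode pairs; with synthetic draws the exact statistic will of course differ.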
(a) Left hand ERD as captured by TP10 (EEGlass) and C4 (Enobio 8) electrodes.
(b) Right hand ERD as captured by TP9 (EEGlass) and C3 (Enobio 8) electrodes.
Figure 6: Average motor-action (ERD) for lateral electrode pairs of both EEGlass and Enobio 8 EEG systems. Significant differences (p < .001) were found between both electrode pairs for both right and left hand motor-action.
Our preliminary results serve as a proof of concept for piggybacking EEG on eyewear and HMDs in a cost-effective, unobtrusive and socially acceptable fashion. Trials with 1 participant indicate that EEGlass is capable of capturing brain activity manifested in two modes of resting state: (a) eyes open and focused on a target, and (b) eyes closed. In fact, brain activity recorded during resting state with EEGlass demonstrates similar variations in frequency and amplitude to when recorded with an established EEG system such as Enobio 8. However, recorded brain activity linked to upper limb motor-action and captured with EEGlass displayed significant differences when compared to that captured with Enobio 8, as anticipated due to the localized motor activity of the sensorimotor cortices under the C3 and C4 electrodes. Yet, EEGlass managed to capture ERD from its TP9 and TP10 electrodes, though with less power than Enobio 8, due to the low spatial resolution of the EEG signals on the scalp and the reliance on signal propagation to reach the remotely located EEGlass electrodes. Moreover, EEGlass was able to detect subtle eye movements in 4 basic directions, displaying an eye-tracking potential particularly useful for navigating heads-up interfaces.
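A minimal sketch of how such 1-DoF EOG events might be detected from a single channel. The amplitude thresholds and the polarity-to-direction mapping are illustrative assumptions, not the paper's method:

```python
import numpy as np

def classify_eog(window_uv, blink_uv=150.0, saccade_uv=50.0):
    """Toy 1-DoF EOG classifier over a short window of samples (µV):
    very large deflections suggest a blink, sustained smaller deflections
    suggest a gaze shift whose polarity encodes direction. Thresholds and
    the polarity convention are illustrative, not calibrated."""
    peak = np.max(np.abs(window_uv))
    if peak >= blink_uv:
        return "blink"
    if peak >= saccade_uv:
        return "left" if np.mean(window_uv) < 0 else "right"
    return "rest"
```

Such threshold rules only separate one axis of movement; recovering all four directions, as in Figure 7, requires inspecting more than one channel.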
Undoubtedly, the low sample size (N=1) and the stationary experimental settings are significant limitations that we will address in follow-up studies. However, human skull and brain anatomy is largely homogeneous across individuals, and the eyewear/HMD form factor ensures a rather stable electrode contact, only somewhat influenced by movement. In future iterations, we will utilize prominent machine learning techniques for training algorithms to match input from EEGlass to that of established EEG systems, and test our prototype in user studies with actual HMDs. We believe a merger between EEG and HMDs bears an unprecedented potential to revolutionize human-machine interaction, facilitating touch-less input and greatly increasing human-machine communication throughput.
ACKNOWLEDGMENTS
The authors acknowledge the financial support of the Swiss National Science Foundation (SNSF) under grant number 184146. We would also like to thank Ph.D. student Octavio Marin Pardo for helping with the data acquisition.
Figure 7: Electrooculographic (EOG) activity (µV/ms) recorded by EEGlass and linked to eye movement in 4 basic directions.

REFERENCES
Hans Berger. 1933. Über das Elektrenkephalogramm des Menschen. European Archives of Psychiatry and Clinical Neuroscience 98, 1 (1933), 231–254.
Guillermo Bernal, Tao Yang, Abhinandan Jain, and Pattie Maes. 2018. PhysioHMD: a conformable, modular toolkit for collecting physiological data from head-mounted displays. In Proceedings of the 2018 ACM International Symposium on Wearable Computers. ACM, 160–167.
Mark Billinghurst and Thad Starner. 1999. Wearable devices: new ways to manage information. Computer 32, 1 (1999), 57–64.
György Buzsáki and Andreas Draguhn. 2004. Neuronal oscillations in cortical networks. Science 304, 5679 (2004), 1926–1929.
Robert A. Connor. 2018. EEG glasses (electroencephalographic eyewear). US Patent 9,968,297.
Nathan E. Crone, Diana L. Miglioretti, Barry Gordon, and Ronald P. Lesser. 1998. Functional mapping of human sensorimotor cortex with electrocorticographic spectral analysis. II. Event-related synchronization in the gamma band. Brain: A Journal of Neurology 121, 12 (1998), 2301–2315.
Arnaud Delorme and Scott Makeig. 2004. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods 134, 1 (2004), 9–21.
Thalía Harmony, Thalía Fernández, Juan Silva, Jorge Bernal, Lourdes Díaz-Comas, Alfonso Reyes, Erzsébet Marosi, Mario Rodríguez, and Miguel Rodríguez. 1996. EEG delta activity: an indicator of attention to internal processing during performance of mental tasks. International Journal of Psychophysiology 24, 1–2 (1996).
Wolfgang Klimesch. 1999. EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Research Reviews 29, 2–3 (1999).
Dietrich Lehmann, P.L. Faber, Peter Achermann, Daniel Jeanmonod, Lorena R.R. Gianotti, and Diego Pizzagalli. 2001. Brain sources of EEG gamma frequency during volitionally meditation-induced, altered states of consciousness, and experience of the self. Psychiatry Research: Neuroimaging 108, 2 (2001), 111–121.
Ernst Niedermeyer and F.H. Lopes da Silva. 2005. Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins.
Evangelos Niforatos and Mélodie Vidal. 2019. Effects of a Monocular Laser-Based Head-Mounted Display on Human Night Vision. In Proceedings of the 10th Augmented Human International Conference 2019. ACM, 31.
Evangelos Niforatos, Athanasios Vourvopoulos, and Marc Langheinrich. 2017. Amplifying human cognition: bridging the cognitive gap between human and machine. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. ACM, 673–680.
Gert Pfurtscheller and A. Aranibar. 1979. Evaluation of event-related desynchronization (ERD) preceding and following voluntary self-paced movement. Electroencephalography and Clinical Neurophysiology 46, 2 (1979), 138–146.
Gert Pfurtscheller and Christa Neuper. 2001. Motor imagery and direct brain-computer communication. Proc. IEEE 89, 7 (2001), 1123–1134.
Gert Pfurtscheller, Christa Neuper, G.R. Muller, Bernhard Obermaier, Gunter Krausz, A. Schlogl, Reinhold Scherer, Bernhard Graimann, Claudia Keinrath, Dimitris Skliris, et al. 2003. Graz-BCI: state of the art and clinical applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11, 2 (2003).
Yann Renard, Fabien Lotte, Guillaume Gibert, Marco Congedo, Emmanuel Maby, Vincent Delannoy, Olivier Bertrand, and Anatole Lécuyer. 2010. OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence: Teleoperators and Virtual Environments 19, 1 (2010), 35–53.
Daniel L. Schacter. 1977. EEG theta waves and psychological phenomena: A review and analysis. Biological Psychology 5, 1 (1977), 47–82.
Dionisije Sopic, Amir Aminifar, and David Atienza. 2018. e-Glass: A wearable system for real-time detection of epileptic seizures. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–5.
Yuji Uema and Kazutaka Inoue. 2017. JINS MEME algorithm for estimation and tracking of concentration of users. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. ACM, 297–300.
Athanasios Vourvopoulos and Sergi Bermudez i Badia. 2016. Usability and Cost-effectiveness in Brain-Computer Interaction: Is it User Throughput or Technology Related?. In Proceedings of the 7th Augmented Human International Conference 2016. ACM, 19.
Peter Welch. 1967. The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics 15, 2 (1967), 70–73.
... In addition, a real-world application of such a pBCI approach would need a sufficiently usable sensor system available. Research investigating such systems shows different approaches that might lead to ubiquitous solutions supporting the intended in-car application (Zander et al., 2017;Kosmyna and Maes, 2019;Vourvopoulos et al., 2019;Hölle et al., 2021). The recognition of a face combined with the intentional gaze would be a good indication of the intent to interact. ...
Full-text available
An automated recognition of faces enables machines to visually identify a person and to gain access to non-verbal communication, including mimicry. Different approaches in lab settings or controlled realistic environments provided evidence that automated face detection and recognition can work in principle, although applications in complex real-world scenarios pose a different kind of problem that could not be solved yet. Specifically, in autonomous driving—it would be beneficial if the car could identify non-verbal communication of pedestrians or other drivers, as it is a common way of communication in daily traffic. Automated identification from observation whether pedestrians or other drivers communicate through subtle cues in mimicry is an unsolved problem so far, as intent and other cognitive factors are hard to derive from observation. In contrast, communicating persons usually have clear understanding whether they communicate or not, and such information is represented in their mindsets. This work investigates whether the mental processing of faces can be identified through means of a Passive Brain-Computer Interface (pBCI). This then could be used to support the cars' autonomous interpretation of facial mimicry of pedestrians to identify non-verbal communication. Furthermore, the attentive driver can be utilized as a sensor to improve the context awareness of the car in partly automated driving. This work presents a laboratory study in which a pBCI is calibrated to detect responses of the fusiform gyrus in the electroencephalogram (EEG), reflecting face recognition. Participants were shown pictures from three different categories: faces, abstracts, and houses evoking different responses used to calibrate the pBCI. The resulting classifier could distinguish responses to faces from that evoked by other stimuli with accuracy above 70%, in a single trial. 
Further analysis of the classification approach and the underlying data identified activation patterns in the EEG that corresponds to face recognition in the fusiform gyrus. The resulting pBCI approach is promising as it shows better-than-random accuracy and is based on relevant and intended brain responses. Future research has to investigate whether it can be transferred from the laboratory to the real world and how it can be implemented into artificial intelligences, as used in autonomous driving.
... The PhysioHMD system [199] (see Figure 6) merges several types of biometric sensors to an HMD and collects sEMG, EEG, EDA, ECG, and eye-tracking data. EEglass [200] is a prototype of an HMD employing EEG for BCI. Luong et al. [201] estimated the mental workload of VR applications in real-time with the aid of physiological sensors embedded in the HMD. ...
Full-text available
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarized recent advances in multimodal interaction technologies for head-mounted display-based (HMD) XR systems. Our purpose was to provide a succinct, yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to find research gaps. The review aimed to help XR practitioners to apply multimodal interaction techniques and interaction researchers to direct future efforts towards relevant issues on multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.
... Hence, our findings, even though in line with these reported by the scientific literature that used traditional systems (e.g., desktop monitors) to mediate the VR environment, may not be completely comparable. Even though increasingly popular (see e.g., applications as the ones presented inCattan et al., 2019;Vourvopoulos et al., 2019), the use of EEG together with HMDs may be problematic due to interferences between the systems. However, recent analyses ...
Full-text available
Sense of presence has been often explored in the context of virtual reality (VR) and immersive visual technologies; however, standardized and objective measures of the sense of presence have been difficult to find. Studies attempting to find physiological correlates of sense presence using electroencephalography (EEG) have reported mixed results. In the present study, we used brain event‐related potentials (ERPs) elicited by auditory stimuli to identify an objective physiological index of sense of presence during VR, attempting to replicate the findings of previous studies and explain the heterogeneity of results reported in the literature. Participants in our experiment were asked to experience an immersive virtual environment using a modern head‐mounted display while passively hearing task‐irrelevant frequent standard and infrequent deviant tones as in a classic auditory oddball paradigm. Subsequently, they were asked to complete a battery of questionnaires aimed to estimate their sense of presence during the VR. EEG and questionnaire data from three‐seventh participants were analyzed. ERP components evoked by the auditory stimuli were then analyzed. Late ERP components (after 450 ms from stimulus onset) registered over central brain areas were associated with the sense of presence as measured with questionnaires, while earlier components were not associated with presence. The use of different questionnaires and the content of the VR environment may both be a plausible explanation for heterogeneous results as reported in previous studies. The present study showed that late ERP components recorded over the central brain may represent good electrophysiological correlates of the subjective sense of presence.
... The reasons for adopting XR technology for learning include student safety, repeatable learning, reduced educational costs, and overcoming spatio-temporal limitations. Various types of user interfaces such as EEG devices are implemented for brain-computer interaction [1]. XR-assisted training is used in diverse domains, including military training, medical training, government officials, school education, and manufacturing. ...
... Undoubtedly, BCIs have not only been proven to be important tools in the medical domain as either assistive or restorative interfaces but have also introduced a unique form of human-computer interaction (HCI) [10]. In the last few years, BCIs have progressed as an emerging research area in the fields of HCI and interactive systems, primarily due to the introduction of low-cost EEG systems that render BCI technology accessible for out-of-the-lab research, moving closer to integration with wearable technology [11]. Consequently, BCIs provide a wide new range of possibilities in the way users interact with a computer system (e.g., neuroadaptive interfaces). ...
In the last few years, Brain-Computer Interfaces (BCIs) have progressed as an emerging research area in the fields of human-computer interaction and interactive systems. This is primarily due to the introduction of low-cost electroencephalographic (EEG) systems that render BCI technology accessible for non-medical research, but also due to advancements in signal processing and machine learning methods. Consequently, BCIs could provide a wide new range of possibilities in the way users interact with a computer system (e.g., neuroadaptive interfaces). However, major challenges must still be addressed for BCI systems to mature into an established communication medium for effective human-computer interaction. One of the major challenges involves the easy integration of real-time processing pipelines with portable EEG systems for out-of-the-lab use. To date, despite the number of options current open-source tools provide, most toolboxes focus mainly on extending the processing and classification methods but lack an easy-to-design yet extensible architecture for ubiquitous use. Here, we present NeuXus, a modular toolbox in Python for real-time biosignal processing and pipeline design. NeuXus is open-source and platform independent, providing high-level implementation of processing pipelines for easy BCI design and deployment.
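To make the "modular pipeline" idea concrete, here is a generic sketch of the node-graph pattern that such real-time toolboxes build on: each stage is a node that transforms a chunk of samples and pushes the result downstream. This is not NeuXus's actual API; all class and method names are invented for illustration:

```python
class Node:
    """One processing stage; nodes are chained into a pipeline."""
    def __init__(self):
        self.downstream = []

    def connect(self, node):
        """Attach a downstream node; return it so calls can be chained."""
        self.downstream.append(node)
        return node

    def update(self, chunk):
        """Process one chunk of samples and forward the result."""
        out = self.process(chunk)
        for n in self.downstream:
            n.update(out)

    def process(self, chunk):
        return chunk  # identity by default


class Smooth(Node):
    """Toy stand-in for a filter node: a trailing moving average."""
    def __init__(self, width=3):
        super().__init__()
        self.width = width

    def process(self, chunk):
        w = self.width
        return [sum(chunk[max(0, i - w + 1): i + 1]) / min(w, i + 1)
                for i in range(len(chunk))]


class Sink(Node):
    """Terminal node that stores everything it receives."""
    def __init__(self):
        super().__init__()
        self.received = []

    def process(self, chunk):
        self.received.append(chunk)
        return chunk


# Wire a tiny pipeline: source -> smoothing filter -> sink
src, flt, sink = Node(), Smooth(width=2), Sink()
src.connect(flt).connect(sink)
src.update([1.0, 3.0, 5.0, 7.0])
```

A real toolbox adds timestamped buffers, stream I/O (e.g., LSL), and a scheduler that calls `update` as new samples arrive, but the composition pattern is the same.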
Measuring biometric information helps us estimate users' degree of excitement and their negative and positive emotions. By measuring a person's biometric information while they experience virtual reality (VR), it is possible to interactively change the content according to the person's estimated emotional state. However, the hassle and discomfort of wearing sensors interferes with the VR experience, and the body motion caused by the VR experience prevents accurate measurement. Therefore, some studies have developed devices that incorporate biometric sensors into head-mounted displays (HMDs). Because HMDs are worn pressed against the face, biometric sensing via HMDs is robust to body movements and can reduce the discomfort of sensor attachment. This paper introduces our research on HMDs with embedded sensors, including a previous study from this project, reviews various biosensing HMDs, and discusses VR applications that use them.
We present a unified deep learning framework for user identity recognition and imagined action recognition, based on electroencephalography (EEG) signals. Our solution exploits a novel phased subsampling preprocessing step as a form of data augmentation, and a mesh-to-image representation to encode the inherent local spatial relationships of multi-electrode EEG signals. The resulting image-like data is then fed to a convolutional neural network, to process the local spatial dependencies, and eventually analyzed through a Bidirectional LSTM module, to focus on temporal relationships. Our solution is compared against several methods in the state of the art, showing comparable or superior performance on different tasks. Preliminary experiments are also conducted in order to direct future works towards everyday applications relying on a reduced set of EEG electrodes.
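The "phased subsampling" augmentation mentioned above can be pictured as slicing one recording into several phase-shifted, lower-rate copies, each a plausible recording in its own right, which multiplies the training data. A toy version follows; the exact details of the paper's method may differ:

```python
def phased_subsample(signal, factor):
    """Split one recording into `factor` phase-shifted subsampled copies.

    Copy k keeps samples k, k+factor, k+2*factor, ... so each copy has
    the same (reduced) sampling rate but a different phase offset.
    """
    return [signal[k::factor] for k in range(factor)]
```

Each copy would then be converted to the mesh/image representation and fed through the CNN and bidirectional-LSTM stages described in the abstract.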
Evolution has always been the main driving force of change for both the human body and brain. Presently, in the Information era, our cognitive and perceptual capacities cannot merely rely on natural evolution to keep up with the immense advancements in modern technologies. Yet the systems we use daily (e.g., computers, smartphones) remain mostly unaware of our current state, causing what has been described as the "cognitive gap": the inability of systems to adapt to the current cognitive and circadian state of the user (Niforatos et al. 2017). In this edited volume, authors contribute ideas and investigations into bridging this gap by bringing the machine (system) closer to the human (user). From improving our working memory and our ability to retain and learn new information, to extending our perceptual and executive capabilities with wearable or implantable hardware, modern technologies bear an unprecedented potential to seize the role of natural evolution for humans. One should tread lightly in this "Brave New World" of human augmentation, however. In this final chapter, we summarize the key contributions of each chapter in this book, assume a philosophical standpoint on augmentation technologies, and share our vision of their future outlook.
In recent years, increasing evidence of the positive impact of Virtual Reality (VR) on neurofeedback training has emerged. The immersive properties of VR training scenarios have been shown to facilitate neurofeedback learning while leading to cognitive enhancements such as increased working memory performance. However, in the design of an immersive VR environment, there are several covariates that can influence the level of immersion. To date, the specific factors which contribute to the improvement of neurofeedback performance have not yet been clarified. This research aims to investigate the effects of vividness in a Cave automatic virtual environment (CAVE-VR) on neurofeedback training outcome, and to assess the effect on working memory performance. To achieve this, we recruited 21 participants, exposed to neurofeedback training inside a CAVE-VR environment. Participants were divided into three experimental groups, each of which received feedback in a different neurofeedback training scenario with an increasing level of vividness (i.e., low, medium, high), while also assessing the effect of neurofeedback on working memory performance. Current findings show that highly vivid feedback in CAVE-VR results in increased neurofeedback performance. In addition, highly vivid training scenarios had a positive effect on users' motivation and concentration, and reduced boredom. Finally, current results corroborate the efficacy of the neurofeedback enhancement protocol in CAVE-VR for improving working memory performance.
Conference Paper
Head-mounted displays (HMDs) are expected to dominate the market of wearable electronics in the next 5 years. This foreseen proliferation of HMDs yields a plethora of design opportunities for revolutionizing everyday life via novel use cases, but also generates a considerable number of substantial safety implications. In this work, we systematically investigated the effect of a novel monocular laser-based HMD on the ability of our participants to see in low ambient light conditions in lab settings. We recruited a total of 19 participants in two studies and performed a series of established vision tests while using the newly available Focals by North HMD. We tested our participants' night vision after being exposed to different levels of laser luminous power and laser colors while using Focals, either with one or both eyes open. Our results showcase that the image perceived by the non-exposed eye compensates for the loss of contrast sensitivity observed in the image perceived by the laser-exposed eye. This indicates that monocular laser-based HMDs, such as Focals, permit dark adaptation to occur naturally for the non-exposed eye.
Conference Paper
Evolution has always been the main driving force of change for both the human body and brain. Presently, in the Information era, our cognitive capacities cannot simply rely on natural evolution to keep up with the immense advancements in the field of ubiquitous technologies, which remain largely uninformed about our cognitive states. As a result, a so-called "cognitive gap" is forming between the human (users) and the machine (systems), preventing us from fully harnessing the benefits of modern technologies. We argue that a "cognitive information layer", placed in between human and machine, could bridge that gap, informing the machine side about aspects of our cognition in real time (e.g., attention levels). In this position paper, we present our vision for such a software architecture, we describe how it could serve as a framework for designing and developing cognition-aware applications, and we showcase some application scenarios as a roadmap towards human-machine convergence and symbiosis.
Conference Paper
In recent years, Brain-Computer Interfaces (BCIs) have been steadily gaining ground in the market, used either as an implicit or explicit input method in computers for accessibility, entertainment or rehabilitation. Past research in BCI has heavily neglected the human aspect in the loop, focusing mostly on the machine layer. Further, due to the high cost of current BCI systems, many studies rely on low-cost, low-quality equipment, making it difficult to deliver significant advancements in physiological computing. Open-source projects are offered as alternatives to expensive medical equipment. Nevertheless, the cost-effectiveness of such systems is still unclear, as is whether they can deliver the same level of experience as their more expensive counterparts. In this paper, we demonstrate that effective BCI interaction in a Motor-Imagery BCI paradigm can be accomplished without requiring high-end/high-cost devices, by analyzing and comparing EEG systems ranging from open-source devices to medically certified systems.
Conference Paper
Virtual and augmented reality headsets are unique in that they have access to our facial area: an area that presents an excellent opportunity for always-available input and insight into the user's state. Their position on the face makes it possible to capture bio-signals as well as facial expressions. This paper introduces PhysioHMD, a modular software and hardware interface built for collecting affect and physiological data from users wearing a head-mounted display. The PhysioHMD platform is a flexible architecture that enables researchers and developers to aggregate and interpret signals in real time, and to use those signals to develop novel, personalized interactions and evaluate virtual experiences. It offers an interface that is not only easy to extend but is also complemented by a suite of tools for testing and analysis. We hope that PhysioHMD can become a universal, publicly available testbed for VR and AR researchers.
Conference Paper
Activity tracking using a wearable device is an emerging research field. Large-scale studies on activity tracking with eyewear-type wearable devices remain challenging owing to the negative effect such devices have on users' looks. To cope with this challenge, JINS Inc., an eyewear retailer in Japan, has developed state-of-the-art smart eyewear called JINS MEME. The design of JINS MEME is almost the same as that of Wellington-type eyeglasses, so that people can use it in their daily lives. JINS MEME is equipped with sensors to detect a user's eye movement and head motion. In addition to these functions, JINS has developed an application to measure users' concentration levels. In this demonstration, users will experience wearing the JINS MEME glasses, and their concentration will be measured while they perform a certain task at our booth.
The Graz Brain-Computer Interface (BCI) analyses and classifies the dynamics of oscillatory EEG components during motor imagery. At this time, a patient controls the closing and opening of a hand orthosis with a BCI, and students are able to write error-free at a spelling rate of 1 character/minute.
Established in 1982 as the leading reference on electroencephalography, Drs. Niedermeyer's and Lopes da Silva's text is now in its thoroughly updated Fifth Edition. An international group of experts provides comprehensive coverage of the neurophysiologic and technical aspects of EEG, evoked potentials, and magnetoencephalography, as well as the clinical applications of these studies in neonates, infants, children, adults, and older adults. This edition includes digital EEG and advances in areas such as neurocognition. Three new chapters cover the topics of Ultra-Fast EEG Frequencies, Ultra-Slow Activity, and Cortico-Muscular Coherence. Hundreds of EEG tracings and other illustrations complement the text.