fNIRS: A New Modality for Brain Activity-Based Biometric Authentication
Abdul Serwadda Vir V. Phoha Sujit Poudel Leanne M. Hirshfield
Danushka Bandara Sarah E. Bratt Mark R. Costa
Syracuse University, Syracuse, NY 13210
{aserwadd,vvphoha,spoudel,lmhirshf,dsbandar,sebratt,mrcosta}@syr.edu
Abstract
There is a rapidly increasing amount of research on
the use of brain activity patterns as a basis for biomet-
ric user verification. The vast majority of this research is
based on Electroencephalogram (EEG), a technology which
measures the electrical activity along the scalp. In this
paper, we evaluate Functional Near-Infrared Spectroscopy
(fNIRS) as an alternative approach to brain activity-based
user authentication. fNIRS is centered around the measure-
ment of light absorbed by blood and, compared to EEG, has
a higher signal-to-noise ratio, is more suited for use during
normal working conditions, and has a much higher spatial
resolution which enables targeted measurements of specific
brain regions. Based on a dataset of 50 users that was analysed using an SVM and a Naïve Bayes classifier, we show fNIRS to respectively give EERs of 0.036 and 0.046 when
using our best channel configuration. Further, we present
some results on the areas of the brain which demonstrated
highest discriminative power. Our findings indicate that
fNIRS has significant promise as a biometric authentication
modality.
1. Introduction
Given the well known drawbacks of password-based au-
thentication, there is now a significant amount of interest
in the Active Authentication paradigm (e.g., see recent re-
search efforts such as DARPA’s Active Authentication pro-
gram [1], and AFRL’s Mobile Android Multi-Biometric Ac-
quisition program [2]). The major security benefit offered
by Active Authentication (AA) stems from the fact that a
user is monitored throughout a session of interaction with
the computing device, making it exceedingly difficult for a
masquerader to pose as the genuine user. This is in stark
contrast to a password-based authentication setting where a
user is verified once before being granted access, leaving the
system vulnerable to any adversary holding the password.
While AA is perhaps best understood from the perspec-
tive of behavioral patterns (e.g., keystroke, mouse and touch
dynamics), recent work has revealed that neural patterns,
such as those manifested by a user’s brain activity during
different mental tasks, also hold significant promise as an
AA modality. Monitoring these patterns using Electroen-
cephalogram (EEG) sensors, several studies have show-
cased classification accuracies in the 80% to 90% range
(e.g., see [16][14]) depending on the tasks being performed
by the users during authentication.
In this paper, we extend the state-of-the-art in brain
activity-based user authentication and introduce Functional
Near-Infrared Spectroscopy (fNIRS) as an AA modality.
The basic mechanism behind fNIRS is that neural activity
in the brain during different mental tasks causes changes
in blood flow, which can be measured by monitoring the
changes in (near-infrared) light absorbed by the blood (see
details in Section 3). Relative to EEG, fNIRS has a plethora
of advantages, some of which include a higher level of prac-
ticality for use in normal working conditions, a much higher
signal-to-noise ratio, and significantly higher spatial resolu-
tion (see details in Section 3). Although fNIRS is fast be-
coming mainstream in domains such as Human Computer
Interaction (see some recent works [11][23]), it is surpris-
ingly yet to receive significant attention from the biometric
authentication community.
The only work to have examined fNIRS as a user veri-
fication modality is the 2-page abstract by Heger et al. [9].
Different from our work however, the abstract reported re-
sults based on a very small user population (of just 5 users),
used a very small fNIRS device (with less than a sixth of the
number of channels used in this work), did not provide any
insights into the traits depicted by the different features or
brain regions, and focused on the user identification prob-
lem. To the best of our knowledge, ours is the first paper to
examine fNIRS as a user authentication modality, let alone
to explore the credentials of fNIRS as a biometric modality
in detail on a large dataset. The contributions of this paper
are summarized below:
1. Based on a 50-user dataset collected using a 52-
channel fNIRS device, we evaluate fNIRS as an au-
thentication modality. Respectively using an SVM and
Naïve Bayes classifier, we show fNIRS to give Equal
Error Rates of 0.043 and 0.063 when data from all 52
channels is used for authentication and Equal Error
Rates of 0.036 and 0.046 when a sub-set of channels
having highest discriminative power is used. Our work
represents the first steps towards the use of fNIRS as
an AA user authentication modality.
2. We analyze the variations of feature discriminative
power across brain regions and present findings on
which regions of the brain discriminated between users
best while they completed the simple addition tasks
that were used during our experiments. The most pre-
dictive brain regions for authentication were regions
that have been found to be highly sensitive to addition
tasks in prior neuroscience literature.
The rest of the paper is organized as follows. In Section
2, we discuss related work and provide insights into non-
invasive brain measurement in Section 3. We describe our
data collection experiments in Section 4 and present the
fNIRS performance evaluation results in Section 5. We fi-
nally make our conclusions in Section 6.
2. Related Work
A great deal of prior research has explored biometric au-
thentication using a range of physiological and behavioral
user metrics. Approaches to biometric authentication and
identification vary with respect to the features and tech-
niques used. In terms of features, users' periocular regions
[4], eye gaze patterns [20], cardiac biometric patterns [8],
fingerprints [5], and facial features [12] or combinations
thereof, have all been used to support the development of
automated authentication and identification systems.
Prior research has also explored brain activity measure-
ment as a way to authenticate users. Ideally, a brain mea-
surement device suitable for biometric authentication and
identification under normal working conditions would be
non-invasive and portable. It would have fast temporal res-
olution and high spatial resolution, enabling the localiza-
tion of brain activation in specific functional brain regions.
The EEG has been the most studied device used to measure
brain activity during naturalistic human-computer interac-
tions (e.g., see [16][14]), while the relatively new fNIRS
device has been gaining momentum in several research do-
mains in recent years. The EEG measures the waves gener-
ated by cascading electrochemical signals produced by the
firing of neurons while the fNIRS captures blood flow to
brain regions to support the firing of neurons.
EEGs have been available for over one hundred years in
the research domain, while fNIRS was only introduced in
the past twenty-five years; therefore EEGs are much more
likely to be used in biometric verification research. Prior re-
search using EEGs for biometric verification involves brain
measurement while the subjects are at rest [16][3] or en-
gaged in a task that stimulates brain regions associated with
verbal [16], spatial [18], or arithmetic work [18]. However,
when compared to fNIRS, EEG devices are more suscepti-
ble to noise, both from ambient sources (e.g., electrical sys-
tems in buildings), and motion artifacts. Also, EEGs have
lower spatial resolution than fNIRS, making it difficult to
determine the actual regions of the brain that are stimulated
at any given time. EEG’s limitations make fNIRS an attrac-
tive alternative for biometric verification using brain mea-
surements.
As mentioned previously, there has only been one prior
publication studying the potential of fNIRS for user verifi-
cation. This 2-page abstract by Heger et al. [9] used a very
small user population (n=5) for user identification, making
it difficult to determine the scalability of the results and pro-
viding no insights at all into the user authentication prob-
lem that is the focus of this paper. Additionally, the fNIRS
device used by Heger et al. was only capable of measur-
ing 8 points in users' brains (our configuration, described
below, collects 52 measurement locations), and they do not
include any detail about the actual regions of the brains that
were predictive of user authentication on their dataset.
3. Non-Invasive Brain Measurement
There are several brain measurement devices available
in medical and research domains. These devices monitor
brain activation by measuring several biological metrics.
When a stimulus is presented, neurons fire in the activated
region(s) of the brain, causing an electric potential, an in-
crease in cerebral blood flow in that region, an increase in
the metabolic rate of oxygen, and an increase in the volume
of blood flow. All of these factors contribute to the blood
oxygen level dependent (BOLD) signal, which can be de-
tected (in various forms) by a number of brain measurement
techniques such as fMRI, fNIRS, and PET[6]. Ideally, a
brain measurement device suitable for measuring brain ac-
tivity in typical HCI activities would be non-invasive and
portable. It would have extremely fast temporal resolution
(for use in adaptive systems) and it would have high spa-
tial resolution, enabling the localization of brain activation
in specific functional brain regions. Electroencephalograph
(EEG) and fNIRS are the two most popular devices for non-
invasive imaging of the brain. However, when compared
to EEG, fNIRS has higher spatial resolution, lower set-up
time, and a higher signal-to-noise ratio [19]. We focus on
fNIRS, as this is one of the best suited technologies for
non-invasive brain measurement during naturalistic human-
computer interactions.
3.1. Functional Near-Infrared Spectroscopy
As described above, fNIRS is a relatively new non-
invasive technique introduced in the late 1980s [7] to over-
come many of the drawbacks of other brain monitoring
techniques. The tool, still primarily a research modality,
uses light sources in the near infrared wavelength range
(650-850 nm) and optical detectors to probe brain activity.
Light source and detection points are defined by means
of optical fibers held on the scalp with an optical probe.
Deoxygenated (Hb) and oxygenated hemoglobin (HbO) are
the main absorbers of near infrared light in tissues during
hemodynamic and metabolic changes associated with neu-
ral activity in the brain. These changes can be detected
by measuring the diffusively reflected light that has probed
the brain cortex. fNIRS has been used in recent years to
measure a myriad of mental states such as workload, de-
ception, trust, suspicion, frustration, types of multi-tasking,
and stress [22][10].
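The conversion from measured light intensity to hemoglobin concentration changes is typically done with the modified Beer-Lambert law. The sketch below illustrates the idea for a single channel measured at two wavelengths; the extinction coefficients, differential pathlength factor, and source-detector distance are illustrative assumptions, not parameters of any particular device.

```python
import numpy as np

# Extinction coefficients [HbO, Hb] at ~690 nm and ~830 nm
# (units of 1/(mM*cm); illustrative placeholder values)
EXT = np.array([[0.35, 2.10],   # 690 nm
                [1.05, 0.78]])  # 830 nm
DPF = 6.0          # differential pathlength factor (assumed)
DIST = 3.0         # source-detector separation in cm (assumed)

def mbll(intensity_690, intensity_830):
    """Return (delta_HbO, delta_Hb) time series from raw intensities
    using the modified Beer-Lambert law, relative to the first sample."""
    # Optical density change relative to the baseline (first sample)
    d_od = np.stack([
        -np.log(intensity_690 / intensity_690[0]),
        -np.log(intensity_830 / intensity_830[0]),
    ])
    # Solve EXT @ [dHbO, dHb] = dOD / (DIST * DPF) at every time step
    concentrations = np.linalg.solve(EXT, d_od / (DIST * DPF))
    return concentrations[0], concentrations[1]
```

At the baseline sample the optical density change is zero by construction, so both concentration changes start at zero.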
4. Experiment
The goal of our experiment was to authenticate a partic-
ipant based solely on his or her previously acquired brain
data. Three mental tasks were chosen for the experiment,
as we were interested in learning which tasks yielded brain
data that was more predictive of participant identification.
These tasks, and the experiment protocol, are described
next.
4.1. Experiment Tasks
In the experiment, three conditions, or tasks, were cho-
sen based upon a review of psychological and neuroscience
literature to produce consistent patterns of brain activation
for the later identification of subjects. All experiment tasks
were created using Microsoft Powerpoint. The first condi-
tion that subjects were given was called Phone-number re-
call. During this task, participants were instructed to think
of their phone number repeatedly for a twenty second pe-
riod of time. The next condition was called Addition. This
task began with a slide instructing participants to start with
x, where x was a small number under 10, such as 5. Next,
new slides appeared with instructions such as add 6 or add
9 (with no values greater than 9 to be added at a time). Each
addition slide was displayed for 2 seconds, and participants
were told to keep a running sum as new numbers appeared.
The last slide of the addition section instructed the partici-
pant to tell the experimenter the total sum of all of the num-
bers. The third mental task was called Controlled Rest. Par-
ticipants were told to relax and clear their minds during this
task. The controlled rest task was included in order to deter-
mine if it was possible to identify participants during their
resting state.
4.2. Experiment Protocol
Fifty subjects (37 male) participated in the experiment.
Subjects were students from a school in the Northeast. In-
formed consent was obtained, and participants were com-
pensated for their time. We used a randomized block de-
sign, with the three experimental conditions described pre-
viously. Each task lasted 20 seconds and a ten-second
rest period was placed after each task, allowing participants' brains to return to baseline. In each measurement ses-
sion, there were four experimental blocks (3 conditions x 4
blocks = 12 tasks per session).
Each subject completed a total of four measurement ses-
sions (see Figure 1). The first and second sessions were
completed in the morning and afternoon, respectively, of
data collection day one. The third and fourth sessions were
completed in the morning and afternoon, respectively on
data collection day two, which was completed two weeks
after data collection day one.
Place probe on participant
Controlled Rest (20s)
Addition (20s)
Recite Phone Number(20s)
Rest (10s)
X 4 blocks
Remove Probe
Rest (10s)
Rest (10s)
Figure 1: Model of the chronology of tasks users undertook
during each measurement session.
All sessions were identical in their experiment layout and
the fNIRS cap was newly placed on the subject at the be-
ginning of each session, with the probe centered on each
participant’s forehead. Before beginning the first session
of the experiment, subjects were informed about the tasks
and given an opportunity to practice the tasks. They were
told that the tasks would appear in a random order. They
were then given the opportunity to ask any questions they
had about the experiment. Once it was clear that the partic-
ipants understood the tasks, the fNIRS cap was placed on
the participant and the PowerPoint presentation was started.
The fNIRS device used in this experiment was Hitachi Med-
ical’s ETG4000, with a sampling rate of 2Hz. Participants
wore a 52-channel cap (see Figure 2), comprised of 17 light
sources and 16 detectors.
Figure 2: A subject wearing the 52-channel fNIRS device in our lab
5. Performance Evaluation
5.1. Feature Extraction and Analysis
For each of the 52 channels, the raw light intensity
dataset was preprocessed to generate changes in oxyhe-
moglobin, deoxyhemoglobin, and total hemoglobin. All
subsequent analysis in this paper is based on the changes in
oxyhemoglobin (HbO) which were obtained while users
undertook the addition task (see Section 4.1 for details of
the tasks). We first carried out a min-max normalization to
scale all channels to the range [0, 1] before feature extrac-
tion. From each channel we then extracted eight features
from the 41 data points registered during each 20-second
instance of the mathematical task. These features were: (1)
standard deviation of first 10 points, (2) standard deviation
of the last 10 points, (3) standard deviation of the points
in the middle segment, (4) mean of the first 10 points, (5)
mean of the last 10 points, (6) mean of the points in the mid-
dle segment, (7) maximum value and (8) minimum value.
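The normalization and per-channel feature extraction above can be sketched as follows. The bounds of the "middle segment" are an assumption here (everything between the first and last ten points), since the text does not specify them.

```python
import numpy as np

def normalize(channel):
    """Min-max scale one channel's time series to [0, 1]."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo)

def channel_features(x):
    """Eight features from one 41-sample (20 s at 2 Hz) task window:
    std and mean of the first 10, last 10, and middle points, plus
    the window's maximum and minimum values."""
    first, mid, last = x[:10], x[10:-10], x[-10:]
    return np.array([
        first.std(), last.std(), mid.std(),    # features 1-3
        first.mean(), last.mean(), mid.mean(), # features 4-6
        x.max(), x.min(),                      # features 7-8
    ])
```

Applied to all 52 channels, this yields a 416-dimensional feature vector per task instance (8 features per channel).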
To determine which features had the highest discriminative power, we used the relative mutual information, I_R, between each feature and the class labels (also used in [17]). Let F denote a vector containing the outputs of a given feature across the population and C denote a vector of class labels. I_R is computed as the ratio of I(F;C) to H(C), where I(F;C) is the mutual information between F and C and H(C) is the entropy of C. I_R varies between 0 and 1, with values tending towards 1 indicating highest discriminative power. F being a continuous variable, we discretized it (using 20 equally spaced bins) before computing I(F;C).
Figure 3 shows how the discriminative power of two fea-
tures (i.e., the mean of the first ten points and the standard
deviation of the last ten points) varied across the 52 chan-
nels. Figure 3(a) shows the locations of each of the 52 chan-
nels relative to each other and relative to the position of the
eyes, while Figures 3(b) and 3(c) respectively show how I_R
varied across the channel space for the two above mentioned
features. A dark red color indicates a region that had very
high discriminative power while a dark blue color indicates
a region of very low discriminative power. Note that the
I_R values are represented as percentages (see the color-to-I_R mapping at the extreme right of the plots). From the figure,
it is apparent that the mean of the first ten points separated
users better than the standard deviation of the last ten points
(which was one of our worst performing features). Regard-
less of the performance gap between the two features, the
figures reveal an interesting trait: regions around the lower
part of the face were more discriminative than those at the
upper parts. In Section 5.2 we leverage this information to
fine-tune our classification methodology. Note that the fea-
ture analysis described above was only applied to a training
subset of the dataset. The question of whether these feature
analysis findings generalize to the testing dataset is addressed in the next section.
5.2. Authentication Results
5.2.1 Mean and User-level Error Rates
To cut down on the high dimensional feature space and also
speed up the learning process, we dropped the two worst
performing features (on the basis of mean I_R) and retained
5 features per channel. Classifier training (i.e., template
building) was done based on data collected on the first day
while testing was done based on data collected on the sec-
ond day (recall experiment sessions in Section 4.2). To con-
duct impostor tests against a given user, we used 15 samples
randomly drawn from the other (49) users. To test a user’s
template against the user’s own samples (i.e., genuine test-
ing), we used all data provided by the user on the second
day.
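The per-user verification protocol just described (train on day-one data, test genuine samples from day two, draw 15 impostor samples from the other users) can be sketched as below. A nearest-template scorer stands in for the SVM and Naïve Bayes classifiers used in the paper, and the function name and data layout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def user_scores(day1, day2, user, n_impostor=15):
    """Verification scores for one user: the template is built from
    day-1 data and tested against the user's own day-2 samples
    (genuine) plus n_impostor samples drawn from the other users.
    day1/day2 map user id -> (n_samples, n_features) arrays."""
    template = day1[user].mean(axis=0)
    # Higher score = closer to the user's template (stand-in classifier)
    score = lambda X: -np.linalg.norm(X - template, axis=1)
    genuine = score(day2[user])
    pool = np.vstack([day2[u] for u in day1 if u != user])
    impostor = score(pool[rng.choice(len(pool), size=n_impostor, replace=False)])
    return genuine, impostor
```

The genuine and impostor score sets produced this way are what the EER in Section 5.2 is computed over.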
Table 1 shows the mean Equal Error Rates (EER) ob-
tained with the SVM and Naïve Bayes classifiers when clas-
[Figure 3 images omitted. The color maps annotate three channel groupings (Red: 32, 44, 34, 45, 46, 47, 48, 49, 50, 21, 52, 36, 37, 38, 39, 40, 26, 27, 12, 15, 16, 21; White: 4, 5, 6, 9, 10, 19, 28, 29, 42, 43; Blue: 33, 35, 41, 22, 23, 24, 25, 30, 31, 11, 13, 14, 17, 18, 20, 1, 2, 3, 7, 8), and Figure 3(a) shows the 52 channels, 17 light sources, and 16 detectors positioned relative to the left and right eyes.]
(a) Locations of the 52 channels relative to each other and relative to the two eyes (approximately).
(b) Color map showing how the discriminative power of the mean of the first ten points varied across the 52 channels (or brain
regions). The red regions cover a large area relative to the blue regions, meaning that there is a good number of channels for which
this feature was highly discriminative.
(c) Color map showing how the discriminative power of the standard deviation of the last ten points varied across the 52 channels
(or brain regions). The blue regions cover a larger area relative to their coverage area in Figure 3(b), meaning that this feature was
not as discriminative as the feature represented in Figure 3(b).
Figure 3: Illustrating the discriminative power of two features (the mean of the first ten points and the standard deviation of
the last ten points) across the 52 channels. X is a measure of the horizontal distance from the top left corner of Figure 3(a)
while Y is a measure of the vertical distance from the same point. Locations such as the white space between channels 1,
11, 12 and 22 (see Figure 3(a)), are represented by the mean value of I_R of the four neighboring channels for continuity and
clarity of the color map.
Classifier     Mean EER (All Channels)   Mean EER (Best Channels)   % Change in EER
SVM            0.043                     0.036                      17.1
Naïve Bayes    0.063                     0.046                      28.3
Table 1: Comparing the mean EER of the two classification algorithms when all channels were used for classification with
the mean EER obtained when only a sub-set of channels (i.e., the most discriminative channels) were used.
sification was done based on two scenarios: when all chan-
nels were used for user classification and when only the
channels which exhibited the highest discriminative power
in the previous classification step were used. The EER is the
error rate at the threshold when the False Reject Rate (FRR)
equals the False Accept Rate (FAR) and is very widely used
to evaluate the performance of biometric authentication sys-
tems (e.g., see [15][17]). The EER ranges between 0 and 1
(or 0 and 100 on a percentage scale), with values close to
zero pointing to a system that performs well at separating
users.
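The EER just defined can be computed from a set of genuine and impostor scores with a simple threshold sweep; the sketch below is one common approximation, assuming higher scores indicate the genuine user.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep the decision threshold over all observed scores and
    return the error rate at the point where the false-reject rate
    (FRR) and false-accept rate (FAR) are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])   # rejected genuines
    far = np.array([(impostor >= t).mean() for t in thresholds]) # accepted impostors
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2
```

With perfectly separated score distributions the EER is 0; with completely overlapping distributions it approaches 0.5.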
When all channels were used, both classifiers had EERs
of less than 7% which reduced to under 5% when the best
channels were used. The reductions in EER seen when a
sub-set of channels (mostly the lower channels; see Figure
3) were used confirms the benefits of our feature analysis
step (i.e., a performance boost and a lower computation
overhead) and suggests that blood flow around the region
just above eye-level might be the best (relative to blood flow
at other regions of the head) at discriminating between users
undertaking simple mathematical tasks such as addition. A
more rigorous evaluation of these regions of interest and
how they relate to different tasks performed by the authen-
ticated user is part of our ongoing research. Overall, these
low error rates depict the promise of fNIRS as a continuous
authentication modality which could serve as an extra layer of security on top of traditional security mechanisms, e.g., passwords.
Figure 4: CDF of the EERs obtained across the population
for the two classifiers.
While the above described results provide insights into
the mean error rates over the full population, it is interesting
to also explore how each of the 50 users in our experiment
performed. Figure 4 shows a CDF of the user-level EERs
across the population for both classifiers. For both classi-
fiers, over 60% of the population had EERs less than 0.05,
while under 20% of the population had EERs greater than
0.1. The large proportion of users with very low EERs indi-
cates that: (1) a significant number of users had consistent
brain activity patterns over the four measurement sessions
of our study, and that (2) the mean EER seen across the pop-
ulation could perhaps have been substantially improved if the small group of users with inconsistent brain activity patterns over the four sessions (e.g., due to not concentrating on the tasks) had been excluded prior to our authentication evaluations.
5.2.2 Impact of Failure-to-Enroll Policy
The second conclusion made in the previous section
prompts the following question: How would the mean error
rate across the population change if users who exceeded a
certain threshold EER were systematically barred from en-
rolling onto the system? While we do not have explicit in-
formation on which users did not concentrate on the tasks as
instructed, it is reasonable to assume that a carefully tuned
failure-to-enroll policy would have a good chance of elim-
inating these kinds of users and any users who might have
concentrated on the tasks but perhaps just did not have the
[Figure 5 plots omitted: each panel charts the number of enrolled users and the mean EER against the cut-off EER.]
(a) Impact of the failure-to-enroll policy on the performance of the Naïve Bayes classifier.
(b) Impact of the failure-to-enroll policy on the performance of the SVM classifier.
Figure 5: Illustrating how a failure-to-enroll policy at different thresholds affects the classifier error rates.
required consistency of brain activity patterns. In a real
fNIRS-based authentication system, failure-to-enroll deci-
sions would be made based on observations (e.g., EERs)
made on preliminary data collected before the enrollment
phase.
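The policy amounts to a simple filter over per-user EERs; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def apply_enrollment_cutoff(user_eers, cutoff):
    """Failure-to-enroll sketch: bar any user whose preliminary EER
    exceeds the cutoff, then return how many users remain enrolled
    and the mean EER over that reduced population."""
    eers = np.asarray(user_eers)
    enrolled = eers[eers <= cutoff]
    return len(enrolled), enrolled.mean()
```

Sweeping the cutoff over a range of values yields curves like those in Figure 5: a lower cutoff improves the mean EER but enrolls fewer users.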
Figure 5 shows how a failure-to-enroll policy impacted
the EERs of the two verifiers at different cut-off thresholds.
Figure 5(a) shows that when all users who had an EER ex-
ceeding 0.1 were excluded from the system, the mean EER
dropped from around 0.045 to just over 0.025 (an improve-
ment of over 40%) yet about 40 users (i.e., 80% of the orig-
inal population) were still able to enroll onto the system.
When the cut-off EER is reduced to 0.05, the mean EER of
the system reduces further to 0.01 (a change of 67% rela-
tive to the original EER) with 60% of the population able to
enroll. A slightly less dramatic trend is seen with the Naïve Bayes classifier (Figure 5(b)); however, the fact that barring
a small number of users from enrolling onto the system sig-
nificantly improves the performance of the fNIRS authenti-
cation system is still apparent.
6. Conclusions
In this paper, we evaluated fNIRS as a biometric au-
thentication modality based on data collected from 50 users
while they carried out simple arithmetic tasks. When we
used data from all 52 channels of Hitachi Medical's
ETG4000 fNIRS device, we obtained mean EERs of 0.043
and 0.063 respectively for the SVM and Naïve Bayes clas-
sification algorithms. When we used data from a sub-set of
channels having the highest individual discriminative power
(as measured by the relative mutual information metric), the mean EERs of the two classifiers respectively dropped to 0.036 and 0.046. While there is still a need to evaluate
fNIRS for a wider range of mental tasks, these results sug-
gest that fNIRS holds promise as an AA modality. A major
part of our ongoing research is to carry out analysis on a
wider variety of tasks and to more rigorously evaluate the
dependence of authentication performance on specific brain
regions.
7. Acknowledgment
A. Serwadda, V. V. Phoha and S. Poudel were in part
supported by DARPA Active Authentication grant FA8750-
13-2-0274.
References
[1] Darpa-baa-13-16 active authentication (aa)
phase 2. https://www.fbo.gov/
index?s=opportunity&mode=form&id=
aa99ff477192956bd706165bda4ff7c4&tab=
core&_cview=1. Last accessed in April, 2013.
[2] Innovative cross-domain cyber reactive informa-
tion sharing (iccyris). https://www.fbo.gov/
index?s=opportunity&mode=form&id=
51735e41343a6e5ee5014b3a6c8bde3f&tab=
core&_cview=1. Last accessed in Jan, 2015.
[3] M. K. Abdullah, K. S. Subari, J. L. C. Loong, and N. N.
Ahmad. 4(8):917 – 921, 2010.
[4] S. Bharadwaj, H. Bhatt, M. Vatsa, and R. Singh. Periocu-
lar biometrics: When iris recognition fails. In Biometrics:
Theory Applications and Systems (BTAS), 2010 Fourth IEEE
International Conference on, pages 1–6, Sept 2010.
[5] J. Bringer and V. Despiegel. Binary feature vector finger-
print representation from minutiae vicinities. In Biometrics:
Theory Applications and Systems (BTAS), 2010 Fourth IEEE
International Conference on, pages 1–6, Sept 2010.
[6] R. Buxton. Introduction to Functional Magnetic Resonance
Imaging: Principles and Techniques. Cambridge University
Press, 2002.
[7] B. Chance, E. Anday, S. Nioka, S. Zhou, L. Hong,
K. Worden, C. Li, T. Murray, Y. Ovetsky, D. Pidikiti, and
R. Thomas. A novel method for fast imaging of brain function, non-invasively, with light. Opt. Express, 2(10):411–
423, May 1998.
[8] H. da Silva, A. Fred, A. Lourenco, and A. Jain. Finger ecg
signal for user authentication: Usability and performance.
In Biometrics: Theory, Applications and Systems (BTAS),
2013 IEEE Sixth International Conference on, pages 1–8,
Sept 2013.
[9] D. Heger, C. Herff, F. Putze, and T. Schultz. Towards bio-
metric person identification using fnirs. In Proceedings of
the Fifth International Brain-Computer Interface Meeting:
Defining the Future, 2013.
[10] L. M. Hirshfield, P. Bobko, A. Barelka, S. H. Hirshfield,
M. T. Farrington, S. Gulbronson, and D. Paverman. Using
noninvasive brain measurement to explore the psychologi-
cal effects of computer malfunctions on users during human-
computer interactions. Adv. in Hum.-Comp. Int., 2014:2:2–
2:2, Jan. 2014.
[11] L. M. Hirshfield, R. Gulotta, S. H. Hirshfield, S. W. Hincks,
M. Russell, R. Ward, T. Williams, and R. J. K. Jacob. This
is your brain on interfaces: enhancing usability testing with
functional near-infrared spectroscopy. In Proceedings of
the International Conference on Human Factors in Comput-
ing Systems, CHI 2011, Vancouver, BC, Canada, May 7-12,
2011, pages 373–382, 2011.
[12] K. Hollingsworth, K. Bowyer, and P. Flynn. Identifying
useful features for recognition in near-infrared periocular
images. In Biometrics: Theory Applications and Systems
(BTAS), 2010 Fourth IEEE International Conference on,
pages 1–8, Sept 2010.
[13] B. Johnson, T. Maillart, and J. Chuang. My thoughts are not
your thoughts. In Proceedings of the 2014 ACM Interna-
tional Joint Conference on Pervasive and Ubiquitous Com-
puting: Adjunct Publication, UbiComp ’14 Adjunct, pages
1329–1338, New York, NY, USA, 2014. ACM.
[14] K. S. Killourhy and R. A. Maxion. Comparing anomaly-
detection algorithms for keystroke dynamics. In DSN, pages
125–134, 2009.
[15] S. Marcel and J. d. R. Millan. Person authentication using
brainwaves (eeg) and maximum a posteriori model adapta-
tion. IEEE Trans. Pattern Anal. Mach. Intell., 29(4):743–
752, Apr. 2007.
[16] M. Frank, R. Biedert, E. Ma, I. Martinovic, and D. Song. Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication. IEEE Transactions on Information Forensics and Security, 8(1):136–148, 2013.
[17] R. Palaniappan. Identifying individuality using mental task
based brain computer interface. In Intelligent Sensing and
Information Processing, 2005. ICISIP 2005. Third Interna-
tional Conference on, pages 238–242, Dec 2005.
[18] R. Parasuraman and M. Rizzo. Neuroergonomics: The Brain
at Work. Oxford University Press, Inc., New York, NY, USA,
1 edition, 2008.
[19] I. Rigas, G. Economou, and S. Fotopoulos. Human eye
movements as a trait for biometrical identification. In Bio-
metrics: Theory, Applications and Systems (BTAS), 2012
IEEE Fifth International Conference on, pages 217–222,
Sept 2012.
[20] E. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli,
S. Fantini, and R. Jacob. Brainput: Enhancing interactive
systems with streaming fnirs brain input. In Proceedings
of the SIGCHI Conference on Human Factors in Computing
Systems, CHI ’12, pages 2193–2202, New York, NY, USA,
2012. ACM.
[21] E. Treacy Solovey, D. Afergan, E. M. Peck, S. W. Hincks,
and R. J. K. Jacob. Designing implicit interfaces for phys-
iological computing: Guidelines and lessons learned using
fnirs. ACM Trans. Comput.-Hum. Interact., 21(6):35:1–
35:27, Jan. 2015.
Conference Paper
Full-text available
This paper explores the tradeoffs between different types of mixed reality robotic communication under different levels of user work-load. We present the results of a within-subjects experiment in which we systematically and jointly vary robot communication style alongside level and type of cognitive load, and measure subsequent impacts on accuracy, reaction time, and perceived workload and effectiveness. Our preliminary results suggest that although humans may not notice differences, the manner of load a user is under and the type of communication style used by a robot they interact with do in fact interact to determine their task effectiveness.
... AA techniques tackle this challenge by leveraging attributes of the user's interaction with the system to continuously authenticate the user. The vast majority of AA systems in the literature use behavioral patterns such as typing [17,20], gait [14,25], touch [8,23], eye movement patterns [5,18] and brain signal patterns [19,22] while a smaller segment of research has explored physical biometric modalities such as the face [7,15,16]. ...
Conference Paper
Full-text available
For security-sensitive Virtual Reality (VR) applications that require the end-user to enter authenticatioan credentials within the virtual space, a VR user's inability to see (potentially malicious entities in) the physical world can be discomforting, and in the worst case could potentially expose the VR user to visual attacks. In this paper, we show that the head, hand and (or) body movement patterns exhibited by a user freely interacting with a VR application contain user-specific information that can be leveraged for user authentication. For security-sensitive VR applications, we argue that such functionality can be used as an added layer of security that minimizes the need for entering the PIN. Based on a dataset of 23 users who interacted with our VR application for two sessions over a period of one month, we obtained mean equal error rates as low as 7% when we authenticated users based on their head and body movement patterns.
... The Hitachi 52-channel fNIRS device recording cognitive data while a participant completes a task. Due to the frequently-cited advantages for naturalistic, more ecologically-valid experimentation, fNIRS has been used in affective computing to investigate emotional brain states for HCI and BCI applications from authentication systems and entertainment to suspicion and marketing ( Abdul Serwadda, 2015;Bigliassi et al., 2015;Glotzbach et al., 2011;Noah et al., 2015). In medical contexts, fNIRS are used to develop emotion recognition models for diagnosing and evaluating autism spectrum disorders therapy ( Kaliouby, Picard, & Baron-Cohen, 2006). ...
Conference Paper
Full-text available
HCI research has increasingly incorporated the use of neurophysiological sensors to identify users’ cognitive and affective states. However, a persistent problem in machine learning on cognitive data is generalizability across participants. A proposed solution has been aggregating cognitive and survey data across studies to generate higher sample populations for machine learning and statistical analyses to converge in stable, generalizable results. In this paper, I argue that large data-sharing projects can facilitate the aggregation of results of brain imaging studies to address these issues, by smoothing noise in high-dimensional datasets. This paper contributes a small step towards large cognitive data sharing systems-design by proposing methods that facilitate the merging of currently incompatible fNIRS and FMRI datasets through term-based metadata analysis. To that end, I analyze 20 fNIRS studies of emotion using content analysis for: (1) synonym terms and definitions for ‘emotion,’ (2) the experimental stimuli, and (3) the use or non-use of self-report surveys. Results suggest that fNIRS studies of emotion have stable synonymy, using technical and folk conceptualizations of affective terms within and between publications to refer to emotion. The studies use different stimuli to elicit emotion but also show commonalities between shared use of standardized stimuli materials and self-report surveys. These similarities in conceptual synonymy and standardized experiment materials indicate promise for neuroimaging communities to establish open-data repositories based on metadata term-based analyses. This work contributes to efforts toward merging datasets across studies and between labs, unifying new modalities in neuroimaging such as fNIRS with fMRI datasets, increasing generalizability of machine learning models, and promoting the acceleration of science through open data-sharing infrastructure.
Article
Full-text available
Abnormal activity in the human brain is a symptom of epilepsy. Electroencephalogram (EEG) is a standard tool that has been widely used to detect seizures. A number of automated seizure detection systems based on EEG signal classification have been employed in present days, which includes a mixture of approaches but most of them rely on time signal features, time intervals or time frequency domains. Therefore, in this research, deep learning-based automated mechanism is introduced to improve the seizure detection accuracy from EEG signal using the Asymmetrical Back Propagation Neural Network (ABPN) method. The ABPN system includes four levels of repetitive training with weight adjustment, feed forward initialization, error and update weight and bias back-propagation. The proposed ABPN-based seizure detection system is validated using Physionet EEG dataset with matlab simulation, and the effectiveness of proposed seizure system is confirmed through simulation results. As compared with Deep Convolutional Neural Network (CNN) and Support Vector Machine–Particle Swarm Optimization (SVM-PSO)-based seizure detection system, the proposed ABPN system gives the best performance against various parameters. The sensitivity, specificity and accuracy are 96.32%, 95.12% and 98.36%, respectively.
Article
Full-text available
Biometric systems can identify individuals based on their unique characteristics. A new biometric based on hand synergies and their neural representations is proposed here. In this preliminary study ten subjects were asked to perform six hand grasps that are shared by most common activities of daily living. Their scalp electroencephalographic (EEG) signals were recorded using 32 scalp electrodes of which 18 task-relevant electrodes were used in feature extraction. In our previous work, we found that the hand kinematic synergies, or movement primitives, can be a potential biometric. In the current work, we combined the hand kinematic synergies and their neural representations to provide a unique signature for an individual as a biometric. Neural representations of hand synergies were encoded in spectral coherence of optimal EEG electrodes in the motor and parietal areas. An equal error rate of 7.5% was obtained at the system’s best configuration. Also, it was observed that the best performance was obtained when movement specific EEG signals in gamma frequencies (30-50Hz) were used as features. The implications of these first results, improvements, and their applications in the near future are discussed.
Article
Nowadays, mobile devices are often equipped with high-end processing units and large storage space. Mobile users usually store personal, official, and large amount of multimedia data. Security of such devices are mainly dependent on PIN (personal identification number), password, bio-metric data, or gestures/patterns. However, these mechanisms have a lot of security vulnerabilities and prone to various types of attacks such as shoulder surfing. The uniqueness of Electroencephalography (EEG) signal can be exploited to remove some of the drawbacks of the existing systems. Such signals can be recorded and transmitted through wireless medium for processing. In this paper, we propose a new framework to secure mobile devices using EEG signals along with existing pattern-based authentication. The pattern based authentication passwords are considered as identification tokens. We have investigated the use of EEG signals recorded during pattern drawing over the screen of the mobile device in the authentication phase. To accomplish this, we have collected EEG signals of 50 users while drawing different patterns. The robustness of the system has been evaluated against 2400 unauthorized attempts made by 30 unauthorized users who have tried to gain access of the device using known patterns of 20 genuine users. EEG signals are modeled using Hidden Markov Model (HMM), and using a binary classifier implemented with Support Vector Machine (SVM) to verify the authenticity of a test pattern. Verification performances are measured using three popular security matrices, namely Detection Error Trade-off (DET), Half Total Error Rate (HTER), and Receiver Operating Characteristic (ROC) curves. Our experiments revel that, the method is promising and can be a possible alternative to develop robust authentication protocols for hand-held devices.
Article
Full-text available
Authenticating users of computer systems based on their brainwave signals is now a realistic possibility, made possible by the increasing availability of EEG (electroencephalography) sensors in wireless headsets and wearable devices. This possibility is especially interesting because brainwave-based authentication naturally meets the criteria for two-factor authentication. To pass an authentication test using brainwave signals, a user must have both an inherence factor (his or her brain) and a knowledge factor (a chosen pass-thought). In this study, we investigate the extent to which both factors are truly necessary. In particular, we address the question of whether an attacker may gain advantage from information about a given target's secret thoughts.
Article
Full-text available
In today's technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer user's cognitive, emotional, and behavioral responses. An experiment was conducted where participants conducted a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure user's perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users' self-report levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.
Conference Paper
Full-text available
Over the past few years, the evaluation of Electrocardio-graphic (ECG) signals as a prospective biometric modality has revealed promising results. Given the vital and continuous nature of this information source, ECG signals offer several advantages to the field of biometrics; yet, several challenges currently prevent the ECG from being adopted as a biometric modality in operational settings. These arise partially due to ECG signal's clinical tradition and intru-siveness, but also from the lack of evidence on the permanence of the ECG templates over time. The problem of in-trusiveness has been recently overcome with the “off-the-person” approach for capturing ECG signals. In this paper we provide an evaluation of the permanence of ECG signals collected at the fingers, with respect to the biometric authentication performance. Our experimental results on a small dataset suggest that further research is necessary to account for and understand sources of variability found in some subjects. Despite these limitations, “off-the-person” ECG appears to be a viable trait for multi-biometric or standalone biometrics, low user throughput, real-world scenarios.
Article
Full-text available
We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intra-session authentication, 2%-3% for inter-session authentication and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multi-modal biometric authentication system.
Article
A growing body of recent work has shown the feasibility of brain and body sensors as input to interactive systems. However, the interaction techniques and design decisions for their effective use are not well defined. We present a conceptual framework for considering implicit input from the brain, along with design principles and patterns we have developed from our work. We also describe a series of controlled, offline studies that lay the foundation for our work with functional near-infrared spectroscopy (fNIRS) neuroimaging, as well as our real-time platform that serves as a testbed for exploring brain-based adaptive interaction techniques. Finally, we present case studies illustrating the principles and patterns for effective use of brain data in human-computer interaction. We focus on signals coming from the brain, but these principles apply broadly to other sensor data and in domains such as aviation, education, medicine, driving, and anything involving multitasking or varying cognitive workload.
Book
Functional Magnetic Resonance Imaging (fMRI) has become a standard tool for mapping the working brain's activation patterns, both in health and in disease. It is an interdisciplinary field and crosses the borders of neuroscience, psychology, psychiatry, radiology, mathematics, physics and engineering. Developments in techniques, procedures and our understanding of this field are expanding rapidly. In this second edition of Introduction to Functional Magnetic Resonance Imaging, Richard Buxton – a leading authority on fMRI – provides an invaluable guide to how fMRI works, from introducing the basic ideas and principles to the underlying physics and physiology. He covers the relationship between fMRI and other imaging techniques and includes a guide to the statistical analysis of fMRI data. This book will be useful both to the experienced radiographer, and the clinician or researcher with no previous knowledge of the technology.
Book
Functional Magnetic Resonance Imaging (fMRI) has become a standard tool for mapping the working brain's activation patterns, both in health and in disease. It is an interdisciplinary field and crosses the borders of neuroscience, psychology, psychiatry, radiology, mathematics, physics and engineering. Developments in techniques, procedures and our understanding of this field are expanding rapidly. In this second edition of Introduction to Functional Magnetic Resonance Imaging, Richard Buxton – a leading authority on fMRI – provides an invaluable guide to how fMRI works, from introducing the basic ideas and principles to the underlying physics and physiology. He covers the relationship between fMRI and other imaging techniques and includes a guide to the statistical analysis of fMRI data. This book will be useful both to the experienced radiographer, and the clinician or researcher with no previous knowledge of the technology.
Conference Paper
This research work proposes an innovative processing scheme for the exploitation of eye movement dynamics on the field of biometrical identification. As the mechanisms that derive eye movements highly depend on each person's idiosyncrasies, cues that reflect at a certain extent individual characteristics may be captured and subsequently deployed for the implementation of a robust identification system. Our methodology involves the employment of a non - parametric statistical test, the multivariate Wald - Wolfowitz test (WW-test), in order to compare the distributions of saccadic velocity and acceleration features, which are extracted while a person fixates on visual stimuli. In the evaluation section we use two publicly available datasets that supply recorded eye movements from a number of subjects during the observation of a moving spot on a computer screen. The resulting identification rates exhibit the efficacy of the suggested scheme to adequately segregate people according to their eye movement traits.
Conference Paper
This paper describes the Brainput system, which learns to identify brain activity patterns occurring during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. This paper demonstrates that we can use non-invasive methods to detect signals coming from the brain that users naturally and effortlessly generate while using a computer system. If used with care, this additional information can lead to systems that respond appropriately to changes in the user's state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as the subjective NASA-Task Load Index scores in a dual-task human-robot activity.
Conference Paper
Keystroke dynamics-the analysis of typing rhythms to discriminate among users-has been proposed for detecting impostors (i.e., both insiders and external attackers). Since many anomaly-detection algorithms have been proposed for this task, it is natural to ask which are the top performers (e.g., to identify promising research directions). Unfortunately, we cannot conduct a sound comparison of detectors using the results in the literature because evaluation conditions are inconsistent across studies. Our objective is to collect a keystroke-dynamics data set, to develop a repeatable evaluation procedure, and to measure the performance of a range of detectors so that the results can be compared soundly. We collected data from 51 subjects typing 400 passwords each, and we implemented and evaluated 14 detectors from the keystroke-dynamics and pattern-recognition literature. The three top-performing detectors achieve equal-error rates between 9.6% and 10.2%. The results-along with the shared data and evaluation methodology-constitute a benchmark for comparing detectors and measuring progress.