Article
Smartphone User Identity Verification Using Gait Characteristics
Robertas Damaševičius 1,*, Rytis Maskeliūnas 2, Algimantas Venčkauskas 3 and Marcin Woźniak 4
1 Department of Software Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
2 Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania; rytis.maskeliunas@ktu.lt
3 Department of Computer Science, Kaunas University of Technology, 44249 Kaunas, Lithuania; algimantas.venckauskas@ktu.lt
4 Institute of Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland; marcin.wozniak@polsl.pl
* Correspondence: robertas.damasevicius@ktu.lt; Tel.: +370-609-43772
Academic Editor: Young-Sik Jeong
Received: 17 June 2016; Accepted: 21 September 2016; Published: 29 September 2016
Abstract: Smartphone-based biometrics offers a wide range of possible solutions, which could be used to authenticate users and thus to provide an extra level of security and theft prevention. We propose a method for positive identification of a smartphone user's identity using the user's gait characteristics captured by embedded smartphone sensors (gyroscopes, accelerometers). The method is based on the application of the Random Projections method for feature dimensionality reduction to just two dimensions. Then, a probability distribution function (PDF) of the derived features is calculated and compared against the known user's PDF. The Jaccard distance is used to evaluate the distance between the two distributions, and the decision is taken by thresholding. The results for subject recognition are at an acceptable level: we have achieved a grand mean Equal Error Rate (EER) for subject identification of 5.7% (using the USC-HAD dataset). Our findings represent a step towards improving the performance of gait-based user identity verification technologies.
Keywords: user identity verification; smartphone security; gait characteristics; Random Projections
1. Introduction
Recent research indicates that the number of smartphone users is expected to reach 2.08 billion in 2016 (Statista: www.statista.com). The advanced features and high performance characteristics allow smartphones to be used not only as a tool of communication, but also in business applications, to store personal data (contacts, calendar, photos, location data, etc.) and to access personal data over the Internet (social networks, e-mail servers, etc.). The access to personal data provided by smartphones may raise various security problems for smartphone users, related to privacy, personal or business data protection, or the risk of being impersonated. Physical security of the device should be ensured, as smartphones can be left, lost or stolen. According to consumer reports [1], about 10% of U.S. smartphone owners have been the victims of phone theft, while 68% of theft victims were unable to recover their device. Therefore, the development of new reliable and robust authentication and verification methods against phone hijackers is urgently needed.
The most common authentication mechanisms are still based on typing a password or PIN or assembling a graphical puzzle. Security of password-based authentication systems mainly depends upon keeping the passwords secret. While these authentication methods each have their advantages, such as high accuracy, they still require the users to memorize the password or a puzzle. The problem is that the required secret is usually either secure or easy to remember, but rarely both. When managing multiple passwords
or puzzles for different systems or devices (one for each device), remembering each of them is a significant overhead for users. In practice, users often use the same password for different applications and choose easy-to-remember passwords, such as those that contain semantic information like birthdays, family names, or pet names. In order to use different and random-looking passwords, users often write their passwords down. As a result, the search space for an attacker is decreased. These practices make password-based authentication mechanisms vulnerable to dictionary attacks.
Furthermore, entering a password may be time-consuming, error-prone and cumbersome,
especially while using the phone on the go, and that is why many users are not using passwords.
Drawing a graphical pattern on screen may reduce the burden, but it still requires explicit user
interaction and is not very convenient in high mobility scenarios. Another target group of users who
find passwords difficult are people suffering from memory loss or hand tremor. Finally, passwords are
only artificially associated with users and cannot truly verify the identity of individuals. Consequently,
they can be spied upon, guessed, lost or stolen, resulting in impersonation attacks and other security
breaches. As a result, a significant part of users consider the password/PIN-based authentication as
inconvenient and do not use it (see the results of a survey presented in [
2
]). According to a survey
presented in [
3
], only 13% of the participants secure their phones with a PIN or visual code during
standby, or deactivate the authentication methods of their mobile devices citing usability issues as the
main reason for it.
Other authentication methods such as face recognition, speech recognition or fingerprint scans
are not widely used. For face recognition, the main concerns are the restricted memory and
computational power available, as well as the uncontrolled ambient environment. For a continuous
authentication, the speech during phone calls is analyzed and the authentication is performed in the
background, which also introduces considerable computational overhead and reduces battery life.
Fingerprint scanning requires an extra high-cost sensor that is not needed by the average end-user.
Capturing high-quality finger photos for fingerprint recognition using existing phone cameras is still
a problem for current mobile phones.
Recently, biometric methods of authentication have started to be used, including face, voice or fingerprint recognition [4]. These authentication methods do not require memorization and depend
upon unique biometric characteristics of a user. However, they heavily depend upon ambient
conditions, e.g., poor lighting or ambient noise may prevent the device from correctly recognizing
the face or voice of its user. Furthermore, after the user logs in, there are no further authentication
procedures employed until the phone locks or switches off. A hijacker may gain access to the phone if
the owner leaves it unattended. Another problem is that if a hijacker steals the phone but does not try to log in, no active authentication measures are initiated by the device.
To overcome these problems, smartphone-based biometrics offers a wide range of possible solutions, which could be used to verify the user's identity and thus to provide an extra level of security and theft prevention. One such solution is the ability to recognize human gait (a person's walking style) using a set of in-built sensors such as accelerometers. Considering that each person has a unique way of walking containing user-distinctive patterns, inertial sensors embedded in smartphones can be applied to the problem of gait recognition in security-related applications [5]. Human gait has been widely acknowledged by researchers as a biometric trait that can be used for authentication purposes via recognizing individuals based on their behavioral or physiological characteristics [6]. One of the advantages of human gait is that it is passively observable, unobtrusive, implicit, continuous and concurrent, and it is easily measured as a user carries his phone around. When the user is walking, the phone recognizes him based on his gait, so that he can directly use the services of the phone without any further authentication. Hence, in contrast to password- or PIN-based authentication, it incurs no extra effort for the user.
Gait recognition can be executed continuously in the background while the user is walking, and during log-in, no additional calculations are required, thus avoiding annoying delays. Furthermore, when several users use the same device, biometric authentication can be used to automatically personalize the services provided by a mobile device [7]. Only if the user is not recognized by his walk, or is not walking, may an active authentication via PIN or graphical puzzle be activated [8]. Gait-based authentication is difficult to attack as it is very difficult to emulate the gait characteristics of a legitimate user, and by trying to do so the attacker will probably appear even more suspicious. The deployment of gait-based user identification is cost-efficient as it does not require deploying additional hardware or increasing product cost (accelerometer sensors are already available on most smartphones).
On the other hand, gait-based authentication has its own weaknesses, which have been acknowledged many times before: gait can be affected by clothes, shoes, carried objects, physical changes to the state of the user (injury, weight gain/loss, aging), phone placement and orientation, environmental context (conditions of the walking surface), stimulants (drugs and alcohol), and psychological state (mood). Additional user time may be required for dealing with Failure To Acquire (FTA) errors. These weaknesses reduce the discriminating power of gait as a biometric, but still do not preclude its use as a complementary security mechanism or for providing access to low-security data and resources (such as music files).
Summarizing, authentication via accelerometer-based biometric gait recognition offers a user-friendly alternative to common authentication methods on smartphones. It has the great advantage that the authentication can be performed without user interaction. Furthermore, it can be used as one of the security levels in a multi-level security system that allows trading off usability and accessibility. Since 2003, implicit sensor-based gait recognition has been proposed to support existing authentication mechanisms, which are obtrusive and inconvenient for frequent use on mobile phones, and has achieved promising results.
In 2003, Wang et al. [9] applied a maximum-based cycle extraction to the vertical acceleration measured using the data of a sensor attached to the back of the waist. The cycles are segmented based on identified extreme points, and different features, like relative time in cycle or slope of the straight line between two endpoints, are extracted. Classification based on the Dynamic Time Warping (DTW) distance resulted in an EER of 5% for a dataset containing data of 24 subjects.
Ailisto et al. [10] proposed gait authentication based on a wearable accelerometer. Acceleration data were analyzed to find individual steps, normalize them, and align them with the template. Then, cross-correlation was applied as a measure of similarity, reaching an EER of 6.4%.
Thang et al. [11] used accelerometer data in the time domain to construct gait templates and DTW to evaluate the similarity score. Features in the frequency domain are classified using a Support Vector Machine (SVM), achieving accuracies of 79.1% and 92.7%, respectively.
Rong et al. [12] applied a simple cycle extraction method based on the number of zero-crossings. The average cycle was computed by normalizing the detected cycles to equal length using DTW. The gait template was constructed by concatenating average cycles for each acceleration direction. The same DTW algorithm was used for comparison, and an EER of 5.6% was obtained on a database of 21 subjects.
Pan et al. [13] attached accelerometers (Wii remote) to five different parts of the body, i.e., upper arm, wrist, thigh, ankle, and waist, and reached recognition rates of 96.7% when using all five channels for a database containing 30 subjects.
Sprager [14] used a mobile phone (Nokia N95) attached to the hip and divided the recorded forward-backward and vertical acceleration data into cycles. Cumulants of order 1 to 4 were extracted from the signals and transformed into a feature vector. SVMs were used for classification, reaching a recognition rate of 92.9% for a dataset of six subjects.
Bachlin et al. [15] evaluated the influence of shoes, weight, and time on gait. Four different feature types are computed for different signal types: segments containing 64 data samples with and without normalization of step length, and FFT coefficients from a jumping window containing 256 samples starting at regular points or at the heel strike. Similarity between data was computed using one-way Analysis of Variance (ANOVA) and by determining the percentage of positions in the feature vector for which the ANOVA showed that they are not statistically significantly different. An EER of 2.8% was reported for the same-day recognition task, while the EER increased to 21.3% when mixed data of several days was used, using data of five subjects.
Trivino et al. [16] used a linguistic model based on the computational theory of perceptions. The perception of the signal was modeled by a Fuzzy Finite State Machine (FFSM) and the model was expressed via linguistic terms. The method was tested on a database containing same-day data of eleven subjects and an EER of 3% was obtained.
Frank et al. [17] used time-delay embedding models created from the acceleration data collected from 25 individuals via a smartphone placed in the trouser pocket. For each subject, the probe segments were mapped into the model space. Considering four nearest neighbors, scores were calculated for each mapped test segment. Classification was based on the highest average score, resulting in a perfect classification result for the given test set.
Nickel et al. [18] used the Mel- and Bark-frequency cepstral coefficients (MFCC and BFCC) and an SVM classifier. The proposed approach showed competitive recognition performance, yielding a 5.9% false match rate (FMR) and a 6.3% false non-match rate (FNMR).
Kwapisz [7] used J48 and Neural Net classifiers for classifying multiple activity data, such as walking, jogging, going up stairs, and going down stairs, of 36 subjects. Forty-three features were generated for each axis for each feature type as follows: average, average acceleration value, standard deviation, average absolute difference, average resultant acceleration, time between peaks, and binned distribution. They were able to identify a person walking with a positive authentication rate of 82.1%–92.9%.
Kobayashi et al. [19] constructed a feature extraction model based on Fourier transform features derived from 58 subjects who held the phone in a hand while walking. The model resulted in accuracy between 45% and 50%.
Juefei-Xu et al. [20] used SVM, a time-frequency spectrogram model and a cyclo-stationary model using data collected from 36 subjects. The best results were a 99.4% verification rate with normal walking and a 96.8% verification rate with fast walking using both accelerometer and gyroscope data.
Hoang et al. [21] used the gait template matching approach to compare data collected from 38 subjects on four consecutive gait cycles and reported an EER of 3.5%.
Derawi and Bours [22] proposed a feature extraction method that used time interpolation to find the average cycle of a subject for authentication. The result of this study was an EER of 20.1% for a dataset of 10 subjects.
Wolff [23] used a Gaussian distribution model constructed using the variance in acceleration and orientation across the three dimensions (x, y, and z) and achieved a subject classification accuracy of 83%.
Lu et al. [24] used a Gaussian Mixture Model–Universal Background Model (GMM-UBM) framework for gait verification. The authentication is done by comparing the likelihood score from a user gait model (representing the user's specific gait pattern) and a universal background model of all human gait patterns. The gait model used the following features of signals: mean, variance, skewness, kurtosis, energy, mean crossing rate, energy ratio between vertical and horizontal components, spectrum peak, spectral entropy, ratio between low frequency band energy and high frequency band energy, compressed sub-band cepstral coefficients and coefficients of autocorrelation.
Lin et al. [25] proposed a system for gait recognition analysis. The αβ filters were used to improve the recognition, and Empirical Mode Decomposition (EMD) was used to filter the noise. Then, Linear Discriminant Analysis (LDA) was applied to the Fourier transform energy spectrum for training and recognition.
Johnston et al. [26] used smartwatches to collect gait data and classical induction algorithms from the Weka package for subject authentication, achieving an EER of 2.6% when using features derived from the accelerometer data and an EER of 8.1% when using data derived from the gyroscope sensor.
Attal et al. [27] have presented a comparative study of classifiers (k-Nearest Neighbors (kNN), Gaussian Mixture Models, Support Vector Machines (SVM), Random Forest, k-means and Hidden Markov Models (HMM)), which can be used to recognize human activities from wearable inertial sensor data. Best results were obtained with the kNN classifier, reaching an accuracy of 96.53%.
Abidine et al. [28] have discussed feature extraction methods (Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA)) and their relevance to improving the classification accuracy of existing daily living activity recognition systems, and achieved 77.0% accuracy using a Weighted Support Vector Machines (WSVM) classifier.
The related works are summarized in Table 1.
Table 1. Summary of related work in human activity recognition domain.

| Ref. | Features | Methods | Subjects | Results |
|---|---|---|---|---|
| Wang et al. [9] | Domain-specific (e.g., relative time in cycle or slope of straight line between two endpoints) | Maximum-based cycle extraction, Dynamic Time Warping (DTW) distance | 24 | 5% (EER) |
| Ailisto et al. [10] | Averaged x (forward) and z (vertical) acceleration signals | Template matching, cross-correlation | 36 | 6.4% (EER) |
| Thang et al. [11] | Time and frequency domain features | Gait templates, Dynamic Time Warping (DTW), Support Vector Machine (SVM) | 11 | DTW: 79.1%; SVM: 92.7% (accuracy) |
| Rong et al. [12] | Acceleration | Zero-crossing-based cycle extraction, DTW | 21 | 5.6% (EER) |
| Pan et al. [13] | Extrema in acceleration data space | Difference-of-Gaussian filtering, Nearest Neighbors | 30 | 96.7% (accuracy) |
| Sprager [14] | Order 1–4 cumulants of acceleration data | Support Vector Machine (SVM) | 6 | 92.9% (accuracy) |
| Bachlin et al. [15] | FFT coefficients | FFT, one-way Analysis of Variance (ANOVA) | 5 | 2.8%–21.3% (EER) |
| Trivino et al. [16] | Vertical acceleration, lateral acceleration, and acceleration in the progress direction | Fuzzy Finite State Machine (FFSM), linguistic model | 11 | 3% (EER) |
| Frank et al. [17] | Acceleration data | Time-delay embedding models, k-Nearest Neighbors | 25 | Perfect classification |
| Nickel et al. [18] | Mel- and Bark-frequency cepstral coefficients (MFCC, BFCC) | SVM classifier | 48 | 5.9% (FMR); 6.3% (FNMR) |
| Kwapisz [7] | Average, average acceleration value, standard deviation, average absolute difference, average resultant acceleration, time between peaks, binned distribution | J48 and Neural Net classifiers | 36 | 82.1%–92.9% (positive authentication rate) |
| Kobayashi et al. [19] | Cross-correlations of Fourier transform coefficients | Multi-class classification by nearest means in Fisher discriminant space and majority voting | 58 | 45%–50% (accuracy) |
| Juefei-Xu et al. [20] | Accelerometer and gyroscope data | SVM, a time-frequency spectrogram model and a cyclo-stationary model | 36 | 96.8%–99.4% (accuracy) |
| Hoang et al. [21] | Magnitude of the acceleration forces acting on three directions (x, y and z) | Gait template matching | 38 | 3.5% (EER) |
| Derawi and Bours [22] | Magnitude of the acceleration | Weighted moving average (WMA) filter, cycle detection, Manhattan distance metric, LibSVM | 10 | 99.6% same-subject, 87.6% cross-subject (accuracy) |
| Wolff [23] | Variance in acceleration and orientation across the three dimensions (x, y, and z) | Gaussian distribution model | — | 83% (accuracy) |
| Lu et al. [24] | Mean, variance, skewness, kurtosis, energy, mean crossing rate, energy ratio between vertical and horizontal components, spectrum peak, spectral entropy, ratio between low and high frequency band energy, compressed sub-band cepstral coefficients, compressed sub-band cepstral coefficients of autocorrelation | Gaussian Mixture Model–Universal Background Model (GMM-UBM) | 47 | 14% (EER) |
| Lin et al. [25] | Spectral energy diagrams of pitch, roll, acceleration X, acceleration Y, and acceleration Z | αβ filtering, Empirical Mode Decomposition (EMD), Fourier Transform, Linear Discriminant Analysis (LDA) | 10 | 90% (recognition rate) |
| Johnston et al. [26] | Average sensor value, standard deviation, average absolute difference between the 200 values and the mean of these values, time between peaks (each axis), binned distribution, average resultant acceleration | Multilayer Perceptron (MLP), Random Forest, Rotation Forest, and Naive Bayes | 59 | 2.6%–8.1% (EER) |
The aim of this paper is to analyze the existing research on gait recognition using features derived
from the acceleration and gyroscope sensors of a smartphone and propose a method for gait-based
user identity verification. The novelty of the paper is the application of Random Projections for gait
feature dimensionality reduction in the context of user identity verification.
The structure of the remaining parts of the paper is as follows. Section 2 describes the methodological background for human activity recognition and subject identification using smartphone or wearable sensors and the proposed method. Section 3 describes our experiments using the USC-HAD dataset and the obtained results. Section 4 presents the evaluation and discussion. Finally, Section 5 presents conclusions and considers future work.
2. Materials and Methods
2.1. Background
As a methodological background of our analysis, we adapt the concept of the Context Pyramid from Pei et al. [29]. We describe the domain of Human Activity Recognition (HAR) using a six-level Context Pyramid: Raw Sensor Data, Patterns, Activities, Actions, and Context (see Figure 1). In this paper, we focus on the three lowest levels of the Context Pyramid: sensor data, features, and activities. Raw data from diverse sensors are the foundation of the Context. Based on the Raw Data, we can extract activity features such as spatial coordinates, orientation, movement direction, speed and acceleration. Activities define the higher levels of the Context Pyramid (the state, position and context of the person). Hereinafter, we continue with the analysis of the HAR tasks (Section 2.2) and human activities (Section 2.3).
Symmetry2016,8,100 6of20
Theaimofthispaperistoanalyzetheexistingresearchongaitrecognitionusingfeatures
derivedfromtheaccelerationandgyroscopesensorsofasmartphoneandproposeamethodforgait
baseduseridentityverification.ThenoveltyofthepaperistheapplicationofRandomProjectionsfor
gaitfeaturedimensionalityreductioninthecontextofuseridentityverification.
Thestructureoftheremainingpartsofthepaperisasfollows.Section2describesthe
methodologicalbackgroundforhumanactivityrecognitionandsubjectidentificationusing
smartphoneorwearablesensorsandtheproposedmethod.Section3describesourexperimentsusing
theUSCHADdatasetandtheobtainedresults.Section4presentstheevaluationanddiscussion.
Finally,Section5presentsconclusionsandconsidersfuturework.
2.MaterialsandMethods
2.1.Background
AsamethodologicalbackgroundofouranalysisweadapttheconceptofContextPyramid
adoptedfromPeietal.[29].WedescribethedomainofHumanActivityRecognition(HAR)usinga
sixlevelContextPyramid:RawSensorData,Patterns,Activities,Actions,andContext(seeFigure1).
Inthispaper,wefocusonthethreelowestlevelsoftheContextpyramid:sensordata,features,and
activities.RawdatafromdiversesensorsarethefoundationoftheContext.BasedontheRawData,
wecanextractactivityfeaturessuchasspatialcoordinates,orientation,movementdirection,speed
andacceleration.ActivitiesdefinethehigherlevelsoftheContextPyramid(thestate,positionand
contextoftheperson).HereinafterwecontinuewiththeanalysisoftheHARtasks(Section3.2)and
humanactivities(Section3.3).
Figure1.Contextpyramid(adaptedfrom[25]).
2.2.TasksofHAR
WedefinefourmaintasksofHAR(seeFigure2):
Task1.Basicactivityrecognition:Basicactivitiesarelowlevelactivitiessuchaswalking,sitting,
standing,i.e.,activitieswhichcanbecharacterizedbystatisticalsequenceofbodymotionsor
gestures,andwhichtypicallylastbetweenfewsecondsandseveralminutes.Onanevensmallertime
scale,briefanddistinctbodymotionssuchastakingasteparesometimesreferredtoasactions,
movements,gestures,ormotifs[30].Lowlevelactivitiesareonlylooselydefinedsincethereisno
generallyaccepteddefinitionofthesetermsintheactivityrecognitioncommunity.Bobick[31]
attemptedtodifferentiate“action”asahighersemanticlevelthanan“activity”basedonthe
occurrencesofmovementsandinteractions,inwhichhehasdefinedan“action”.Incontrast,
Govindaraju[32]definedan“action”asanatomicmotionpatternthatisoftengesturelikeandhasa
specifictrajectory(e.g.,wavearm),whereasan“activityisaseriesofactionsperformedinan
orderedsequencethatisdependentonhumanmotionpatterns.
Figure 1. Context pyramid (adapted from [29]).
2.2. Tasks of HAR
We define five main tasks of HAR (see Figure 2):
Task 1. Basic activity recognition: Basic activities are low-level activities such as walking, sitting, and standing, i.e., activities which can be characterized by a statistical sequence of body motions or gestures, and which typically last between a few seconds and several minutes. On an even smaller time scale, brief and distinct body motions such as taking a step are sometimes referred to as actions, movements, gestures, or motifs [30]. Low-level activities are only loosely defined, since there is no generally accepted definition of these terms in the activity recognition community. Bobick [31] attempted to differentiate an "action" as a higher semantic level than an "activity", based on the occurrences of movements and interactions. In contrast, Govindaraju [32] defined an "action" as an atomic motion pattern that is often gesture-like and has a specific trajectory (e.g., wave arm), whereas an "activity" is a series of actions performed in an ordered sequence that is dependent on human motion patterns.
Figure 2. Human activity recognition tasks.
Task 2. Daily activity recognition. Activities of Daily Living (ADL) are a standard set of higher-level activities used by physicians and care-givers as a measure to estimate the physical well-being of elderly patients, as well as their need for assisted living. High-level activities are usually composed of a collection of low-level activities, and are longer-term, typically lasting from several minutes to a few hours. The ADLs include such activities as dressing, bathing, toileting, cleaning the room, cooking, eating, and washing dishes. A related set of complex activities, called Instrumental ADLs (IADLs), consists of using the phone, shopping, food preparation, housekeeping, doing laundry, transportation, taking medications, and handling finances, which are dependent on memory and executive functioning of subjects. Apart from the activities mentioned so far, further activities that can be recognized with wearable sensors include sports activities such as cycling, rowing, running, and Nordic walking. Recognizing the complete set of ADLs and IADLs using sensors is challenging, since some activities such as handling finances are only loosely defined and are difficult to detect. The recognition of high-level activities is important for the description of an individual's daily routine.
Task 3. Unusual event recognition. Event recognition is a domain aiming to provide convenience, safety and comfort for the elderly by detecting potentially dangerous human activity events to reduce the risks for the elderly. The following types of unusual events have been analyzed in the literature:
Sudden events can be defined as an abrupt, unintentional and unexpected change in the human body position that happens during a short period of observation, has not been observed before (i.e., was not present in the training dataset) and is unpredictable [33]. In the case of a home care assistance system, a sudden event refers specifically to a sudden fall by a patient or elderly person that requires immediate response. Detection and tracking of the position and movement of the human body and parts thereof are useful features for early indication of a sudden fall event.
Abnormal events are actions that are performed at an unusual location and at an unusual time [34]. This type of event can be characterized as a temporal or spatial outlier, which deviates from normal events as represented in the training dataset or learned motion patterns, and requires a longer observation to identify.
Task 4. Biometric subject identification. The sensor readings registered during different human activities can be considered as a kind of physiological biometric characteristic that is further used to identify the subject [34]. The authentication process is an essential requirement so as to permit the genuine user (owner) to obtain access to the device. Behavioral biometrics is related to specific actions (e.g., walking, running, etc.) and the way that each person executes them [35]. Biometric-based authentication verifies that the genuine owner of the device is present in the immediate vicinity of the biometric sensor. An example of activity-based biometric characteristics is gait, which is a complex spatio-temporal motor-control behavior that allows biometric recognition of individuals at a distance [36].
Task 5. Prediction of energy expenditures. Energy expenditure estimation using wearable sensors seeks to find the relationship between the energy expenditure and the sensor outputs [37]. In this task, activity recognition is performed as a part of the energy expenditure prediction process. Energy expenditure is estimated using the MET (Metabolic Equivalent of Task), which is defined as the ratio of the metabolic rate during a specific physical activity to a reference metabolic rate [38]. The measurement is useful for real-time physical activity monitoring.
All tasks of the HAR domain require correct identification of human activities from sensor data, which in turn requires that human activities be properly categorized and described. In the following subsection, a taxonomy of human activities is analyzed.
2.3. Taxonomy of Human Activities
The development of taxonomies of human activities is important, as the gained knowledge can be used in multi-layer (multi-step) classification systems such as that described in [39]. There are a great number of taxonomies (explicitly or implicitly formulated) found in the literature. In fact, almost every author of a paper on this topic introduces and analyses his own set of human activities. Examples of such taxonomies are given hereinafter.
Zhu and Sheng [39] classified human daily activities as stationary (lying, sitting, standing, walking), motional and other activities. Motional activities include long-term activities such as walking and transitional activities such as sit-to-stand.
The taxonomy of Incel et al. [40] covers locomotion (walking, running, sitting, standing, lying), transportation (biking, riding, driving), exercise (bicycling, playing), health-related activities (falls, rehabilitation procedures), and daily activities (shopping, sleeping, working on a PC, eating, etc.).
Lara and Labrador [41] provided a taxonomy of activities recognized by state-of-the-art human activity recognition systems. Seven groups of activities are recognized: ambulation (walking, running, sitting, standing, stairs up/down, elevator up/down), transportation (riding a bus, cycling, driving), phone usage (messaging, calling), daily activities (eating, drinking, working with the PC, watching TV, reading, doing hygiene, cleaning), exercises (rowing, etc.), military (crawling, kneeling, etc.) and upper body activities (chewing, speaking, etc.).
Fleury et al. [42] presented the following classification: Sleeping; Preparing and having a meal; Dressing/undressing; Resting (including watching TV, listening to the radio, reading a book, sitting down on the sofa); Hygiene (tooth brushing and washing of the hands); Bowel movement; and Communication (using a phone).
Capela et al. [43] identified seven different meta-classes (or levels) of activities differing by the level of detail: Level 1: Mobile and immobile (large movements and stairs labeled as mobile; sit, stand, lie, and small movements labeled as immobile); Level 2: Sit and stand (not including small movements); Level 3: Sit, stand, and lie; Level 4: Large movements (going upstairs); Level 5: Ramp up, ramp down, large movements, stairs up, and stairs down; Level 6: Small movements (e.g., sitting, standing or lying); and Level 7: Transition states (transitions between activities).
Atallah et al. [44] proposed the following classification of activities based on the energy expenditure of a person: very low-level activities (e.g., lying down); low-level activities (e.g., eating, drinking, reading, and getting dressed); medium-level activities (e.g., walking, vacuuming, and cleaning); high-level activities (e.g., running and cycling); and transitional (transfer) activities (e.g., sit-to-stand, laying down-to-stand).
Several classifications of human activities include more complex activities related to the field of sports: lying, Nordic walking, outdoor bicycling, rowing with the rowing machine, running, sitting, soccer playing, standing, and walking in [45], or daily activities such as having lunch, breakfast or dinner, going to work, shopping, sleeping, using a computer, and working [46]. The most comprehensive classification of human activities is the Physical Activity Compendium [35], which has 21 categories of activities as follows: Bicycling; Conditioning Exercise; Dancing; Fishing and Hunting; Home Activities; Home Repair; Inactivity; Lawn and Garden; Miscellaneous; Music Playing; Occupation; Running; Self Care; Sexual Activity; Sports; Transportation; Walking; Water Activities; Winter Activities; Religious Activities; and Volunteer Activities.
The results of the analysis of the human activity domain are represented as a feature diagram [47] in Figure 3. The feature diagram notation has been adopted from the field of product line engineering to represent compactly a set of related entities in the domain of interest. The meaning of the elements of the feature diagram is explained below [48]. A feature diagram is a connected graph, where nodes represent features and edges represent relations among features. There are three types of features: mandatory (boxes with the black circle above), optional and alternative. Mandatory features express commonality of the concept, whereas optional and alternative features express variability. Features may appear either as a solitary feature or in groups. If all mandatory features in the group are derived from the same parent in the parent–child relationship, there is the "and" relationship among those features. An optional feature may be included or not if its parent is included in the model. If only one feature can be included from a set of child features, it is called an "alternative" feature.
Symmetry2016,8,100 9of20
ofthefeaturediagramisexplainedbelow[48].Featurediagramisaconnectedgraph,wherenodes
representfeaturesandedgesrepresentrelationsamongfeatures.Therearethreetypesoffeatures:
mandatory(boxeswiththeblackcircleabove),optionalandalternative.Mandatoryfeaturesexpress
commonalityoftheconcept,whereasoptionalandalternativefeaturesexpressvariability.Features
mayappeareitherasasolitaryfeatureoringroups.Ifallmandatoryfeaturesinthegrouparederived
fromthesameparentintheparent–childrelationship,thereisthe“andrelationshipamongthose
features.Anoptionalfeaturemaybeincludedornotifitsparentisincludedinthemodel.Ifonlyone
featurecanbeincludedfromasetofchildfeatures,itiscalledan“alternative”feature.
Figure3.Taxonomyofhumanactivities.
2.4.GeneralSchemeofGaitBasedUserIdentityVerification
Theproposedmethodfollowsatypicalarchitectureofbiometricsystems(seeFigure4)and
consistsoftwocontinuousprocesses:trainingandverification.Duringtraining,thepersonregisters
withthesystemsandthecaptureofhisgaitcharacteristicsusingbuiltinsensorsisperformed.Data
preprocessingisperformedtosegmentthedataintoframes,andremovenoiseandoutlierartifacts.
Featuresareextractedanddimensionalityreducedtoconstructauser’sgaitmodelthatcharacterizes
theconsideredpersonwhilediscardingirrelevantinformation.Itissubsequentlystored,forinstance,
onamemorycardoronacloud.Astheusergaitcharacteristicsmaydriftfromdaytodaydueto
his/herhealthconditionoremotionalstate,thesystemhastoberetrainedatleastonceaday.
Training
Sensing Pr eproc essing Feature
ex tracti on
Feature
reduction
Activity
detection
Owner sgait
model
Sensing Pr eproc essing Feature
ex tracti on
Feature
reduction
Activity
detection
Unknow nuser’s
gaitmodel
Compa ris on
Decision:
AcceptorReject
Verification
update
Figure4.Generalschemeofgaitbaseduseridentityverification.
Duringverification,thefirststepsarethesameasincaseoftraining:sensing,datapre
processing,featureextractionandfeaturedimensionalityreduction.Thereducedsetoffeaturevalues
is,first,activitydetectoridentifiesaspecificactiontheuserisundertaking(e.g.,walkingorrunning).
Thenitiscomparedwiththeusersgaitprofilestoredbythesystemandthedecisionistakenbased
onthedistanceoftherecordedprofilefromknownuser’sprofileeithertoconfirmuser’sidentityor
torejectitandinitiateprespecifiedsecuritymeasures.
Figure 3. Taxonomy of human activities.
2.4. General Scheme of Gait-Based User Identity Verification
The proposed method follows a typical architecture of biometric systems (see Figure 4) and consists of two continuous processes: training and verification. During training, the person registers with the system and his gait characteristics are captured using built-in sensors. Data pre-processing is performed to segment the data into frames and remove noise and outlier artifacts. Features are extracted and their dimensionality reduced to construct a user's gait model that characterizes the considered person while discarding irrelevant information. It is subsequently stored, for instance, on a memory card or in a cloud. As the user's gait characteristics may drift from day to day due to his/her health condition or emotional state, the system has to be retrained at least once a day.
Symmetry2016,8,100 9of20
ofthefeaturediagramisexplainedbelow[48].Featurediagramisaconnectedgraph,wherenodes
representfeaturesandedgesrepresentrelationsamongfeatures.Therearethreetypesoffeatures:
mandatory(boxeswiththeblackcircleabove),optionalandalternative.Mandatoryfeaturesexpress
commonalityoftheconcept,whereasoptionalandalternativefeaturesexpressvariability.Features
mayappeareitherasasolitaryfeatureoringroups.Ifallmandatoryfeaturesinthegrouparederived
fromthesameparentintheparent–childrelationship,thereisthe“andrelationshipamongthose
features.Anoptionalfeaturemaybeincludedornotifitsparentisincludedinthemodel.Ifonlyone
featurecanbeincludedfromasetofchildfeatures,itiscalledan“alternative”feature.
Figure3.Taxonomyofhumanactivities.
2.4.GeneralSchemeofGaitBasedUserIdentityVerification
Theproposedmethodfollowsatypicalarchitectureofbiometricsystems(seeFigure4)and
consistsoftwocontinuousprocesses:trainingandverification.Duringtraining,thepersonregisters
withthesystemsandthecaptureofhisgaitcharacteristicsusingbuiltinsensorsisperformed.Data
preprocessingisperformedtosegmentthedataintoframes,andremovenoiseandoutlierartifacts.
Featuresareextractedanddimensionalityreducedtoconstructauser’sgaitmodelthatcharacterizes
theconsideredpersonwhilediscardingirrelevantinformation.Itissubsequentlystored,forinstance,
onamemorycardoronacloud.Astheusergaitcharacteristicsmaydriftfromdaytodaydueto
his/herhealthconditionoremotionalstate,thesystemhastoberetrainedatleastonceaday.
Training
Sensing Pr eproc essing Feature
ex tracti on
Feature
reduction
Activity
detection
Owner’ sgait
model
Sensing Pr eproc essing Feature
ex tracti on
Feature
reduction
Activity
detection
Unknow nuser’s
gaitmodel
Compa ris on
Decision:
AcceptorReject
Verification
update
Figure4.Generalschemeofgaitbaseduseridentityverification.
Duringverification,thefirststepsarethesameasincaseoftraining:sensing,datapre
processing,featureextractionandfeaturedimensionalityreduction.Thereducedsetoffeaturevalues
is,first,activitydetectoridentifiesaspecificactiontheuserisundertaking(e.g.,walkingorrunning).
Thenitiscomparedwiththeusersgaitprofilestoredbythesystemandthedecisionistakenbased
onthedistanceoftherecordedprofilefromknownuser’sprofileeithertoconfirmuser’sidentityor
torejectitandinitiateprespecifiedsecuritymeasures.
Figure 4. General scheme of gait-based user identity verification.
During verification, the first steps are the same as in the case of training: sensing, data pre-processing, feature extraction and feature dimensionality reduction. First, an activity detector identifies the specific action the user is undertaking (e.g., walking or running). Then, the reduced set of feature values is compared with the user's gait profile stored by the system, and the decision is taken based on the distance of the recorded profile from the known user's profile: either to confirm the user's identity, or to reject it and initiate pre-specified security measures.
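To make this verification flow concrete, the following is a minimal Python sketch under simplifying assumptions (two-dimensional reduced features, per-activity owner templates stored as discretized PDFs, and a fixed decision threshold); the distance computation anticipates the Jaccard metric defined in Section 2.5, and all helper names are illustrative rather than taken from the paper's implementation:

```python
import numpy as np

def estimate_hist_pdf(features, bins=32, lim=5.0):
    # Stand-in for the PDF estimate of Section 2.5: a normalized 2-D histogram
    # over the two reduced feature dimensions.
    pdf, _, _ = np.histogram2d(features[:, 0], features[:, 1],
                               bins=bins, range=[[-lim, lim], [-lim, lim]],
                               density=True)
    return pdf

def verify(reduced_features, activity, owner_pdfs, threshold=0.5):
    # owner_pdfs: dict mapping an activity label to the owner's stored PDF
    # (the "owner's gait model" of Figure 4); names here are assumptions.
    if activity not in owner_pdfs:
        return "fallback"  # e.g., fall back to PIN or graphical puzzle
    probe = estimate_hist_pdf(reduced_features)
    template = owner_pdfs[activity]
    # Jaccard distance between the discretized distributions (Section 2.5).
    d = 1.0 - np.minimum(probe, template).sum() / np.maximum(probe, template).sum()
    return "accept" if d <= threshold else "reject"
```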
2.5. Description of the Method
The proposed method for subject identification based on gait characteristics uses feature dimensionality reduction via Random Projections [49] and classification using a probability density function (PDF) estimate as a decision function.
When performing random projection, the original d-dimensional data is projected onto a k-dimensional (k << d) subspace using a random k × d matrix R. The projection of the data onto the lower k-dimensional subspace is $X^{RP}_{k \times N} = R_{k \times d} X_{d \times N}$, where $X_{d \times N}$ is the original set of N d-dimensional observations. In the derived projection, the distances between points are approximately preserved if points in a vector space are projected onto a randomly selected subspace of suitably high dimension (Johnson–Lindenstrauss lemma [50]). The random matrix R can be selected as follows:

$$ r_{ij} = \begin{cases} +1, & \text{with probability } 1/6 \\ 0, & \text{with probability } 2/3 \\ -1, & \text{with probability } 1/6 \end{cases} \qquad (1) $$
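As an illustration, the sparse matrix of Equation (1) and the projection step can be sketched in a few lines of NumPy; the feature dimensionality (99, matching Section 3.2) and the number of frames below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_matrix(k, d):
    # Entries drawn as in Equation (1): +1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6.
    return rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1/6, 2/3, 1/6])

# X has shape (d, N): N observations of dimension d, matching X_{d x N}.
X = rng.normal(size=(99, 500))           # placeholder: 99 features, 500 frames
R = sparse_random_matrix(2, X.shape[0])  # reduce to k = 2 dimensions
X_rp = R @ X                             # X^{RP}_{2 x N} = R_{2 x d} X_{d x N}
```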
Given the low dimensionality of the target space, we can treat the projection of low-dimensional
observations onto each dimension as a set of random variables for which the probability density
function (PDF) can be estimated using the Parzen window method [51].
If $x_1, x_2, \ldots, x_N$ is a sample of a random variable, then the kernel density approximation of its probability density function is:

$$ \hat{f}_h(x) = \frac{1}{Nh} \sum_{i=1}^{N} K\!\left(\frac{x - x_i}{h}\right) \qquad (2) $$

where K is some kernel function and h is the bandwidth (smoothing parameter). K is taken to be a standard Gaussian function with zero mean and unit variance:

$$ K(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2} \qquad (3) $$
For the two-dimensional case, the bivariate probability density function is calculated as a product of univariate probability functions as follows:

$$ \hat{f}(x, y) = \hat{f}(x) \cdot \hat{f}(y) \qquad (4) $$

where x and y are the data in each dimension, respectively.
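A sketch of Equations (2)–(4) in NumPy, evaluating the univariate estimates on a fixed grid and forming the bivariate estimate as their product; the grid limits and bandwidth below are arbitrary placeholder choices:

```python
import numpy as np

def parzen_pdf(samples, h, grid):
    # Equation (2) with the Gaussian kernel of Equation (3).
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))

def bivariate_pdf(x, y, h, grid):
    # Equation (4): product of the two univariate estimates over a 2-D grid.
    return np.outer(parzen_pdf(x, h, grid), parzen_pdf(y, h, grid))

grid = np.linspace(-5.0, 5.0, 200)   # placeholder evaluation grid
# x and y would be the two coordinates of the projected features X_rp.
```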
However, each random projection produces a different mapping of the original data points that
reveals only a part of the data manifold in the higher-dimensional space. In the case of the binary classification problem, we are interested in a mapping that best separates data points belonging to the two different classes. As a criterion for estimating the mapping, we use the Jaccard distance metric
between two probability density estimates of data points representing each class. The Jaccard distance
is easily adaptable to multidimensional spaces where compared points show relations to different
subsets. The Jaccard distance, which measures dissimilarity between sample sets, is complementary to
the Jaccard coefficient and is obtained by subtracting the Jaccard coefficient from 1, or, equivalently,
by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

$$ d_J(A,B) = 1 - J(A,B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|} \qquad (5) $$
For classification, the random projection with the smallest overlapping area between the two class PDFs is selected. In the case of multiple classes, the method works as a one-class classifier: it recognizes instances of a positive class, while all instances of other classes are recognized as outliers of the positive class. The acceptance/rejection is demonstrated in Figure 5.
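One way to apply Equation (5) to the discretized PDF estimates is to treat them as weighted sets, taking the elementwise minimum as the intersection and the elementwise maximum as the union; this weighted-set reading is our assumption rather than a formula stated explicitly above. Selecting the best projection then amounts to maximizing this distance:

```python
import numpy as np

def jaccard_distance(p, q):
    # Equation (5) for discretized PDFs viewed as weighted sets:
    # |A ∩ B| -> sum of elementwise minima, |A ∪ B| -> sum of elementwise maxima.
    return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

def best_projection(pdf_pairs):
    # pdf_pairs: one (positive-class PDF, negative-class PDF) pair per candidate
    # random projection; keep the projection that separates the classes best.
    return max(range(len(pdf_pairs)),
               key=lambda i: jaccard_distance(*pdf_pairs[i]))
```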
Symmetry2016,8,100 11of20
Figure5.RecognitionofvaliduserbasedonPDFestimateoffeaturevalue.
3.Results
3.1.Dataset
Toevaluatetheperformanceoftheproposedapproachforthesmartphonedata,weusedthe
USCHumanActivityDataset[52]recordedusingtheMotionNodedevice(samplingrate100Hz,
triaxialaccelerometerrange:6g,triaxialgyroscoperange:500dps).Thedatasetconsistsofrecords
recordedwith14subjects(7male,7female;Age:21–49)of12activities,fivetrialseach.
Thefollowingactivitieshavebeenrecorded:WalkingForward(WF),WalkingLeft(WL),
WalkingRight(WR),WalkingUpstairs(WU),WalkingDownstairs(WD),RunningForward(RF),
JumpingUp(JU),Sitting(Si),Standing(St),Sleeping(Sl),ElevatorUp(EU),andElevatorDown(ED).
3.2.Features
Sensorreadingsconsistofsixreadings:accelerationalongx,y‐andzaxes,andgyroscopealong
x,y‐andzaxes.Basedontheextensiveanalysisofliteratureandfeaturesusedbyotherauthors
(especiallybyCapelaetal.[43],Mathieetal.[53],ZhangandSawchuk[52]),wehaveextracted99
featuresofdata,whichhavebeendetailedin[54].Thefeaturerankingwasperformedusing
KullbackLeiblerdivergenceasclassseparabilitycriteriononthehumanactivitydatafromtheUSC
HADdatasetasdescribedin[54].
WesummarizethebestrankedfeaturesforsubjectidentificationinTable2.
Figure 5. Recognition of valid user based on PDF estimate of feature value.
3. Results
3.1. Dataset
To evaluate the performance of the proposed approach for the smartphone data, we used the USC Human Activity Dataset [52], recorded using the MotionNode device (sampling rate 100 Hz, triaxial accelerometer range: ±6 g, triaxial gyroscope range: ±500 dps). The dataset consists of records of 12 activities, five trials each, recorded with 14 subjects (7 male, 7 female; age: 21–49).
The following activities have been recorded: Walking Forward (WF), Walking Left (WL),
Walking Right (WR), Walking Upstairs (WU), Walking Downstairs (WD), Running Forward (RF),
Jumping Up (JU), Sitting (Si), Standing (St), Sleeping (Sl), Elevator Up (EU), and Elevator Down (ED).
3.2. Features
Sensor readings consist of six channels: acceleration along the x-, y- and z-axes, and gyroscope readings along the x-, y- and z-axes. Based on the extensive analysis of the literature and features used by other authors (especially by Capela et al. [43], Mathie et al. [53], and Zhang and Sawchuk [52]), we have extracted 99 features of the data, which have been detailed in [54]. The feature ranking was performed using the Kullback–Leibler divergence as a class separability criterion on the human activity data from the USC-HAD dataset, as described in [54].
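A sketch of such a divergence-based ranking, assuming each feature is summarized by per-class histograms; the symmetrized form used here is a common choice and an assumption on our part, not a detail specified in [54]:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete KL divergence between two normalized histograms.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def rank_features(pos, neg, bins=50):
    # pos, neg: arrays of shape (n_samples, n_features) for the two classes.
    scores = []
    for j in range(pos.shape[1]):
        lo = min(pos[:, j].min(), neg[:, j].min())
        hi = max(pos[:, j].max(), neg[:, j].max())
        hp, _ = np.histogram(pos[:, j], bins=bins, range=(lo, hi))
        hq, _ = np.histogram(neg[:, j], bins=bins, range=(lo, hi))
        scores.append(kl_divergence(hp, hq) + kl_divergence(hq, hp))  # symmetrized
    return np.argsort(scores)[::-1]  # most separable features first
```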
We summarize the best ranked features for subject identification in Table 2.
Table 2. Best ranked features for subject identification.

| Rank | Feature | Description |
|---|---|---|
| 1 | Moving variance of 100 samples of gyroscope data along z-axis | $\mathrm{var} = \frac{1}{N(N-1)}\left(N\sum_{i=1}^{N}x_i^2-\left(\sum_{i=1}^{N}x_i\right)^2\right)$, here $x = g_z$ |
| 2 | Moving variance of 100 samples of acceleration intensity data | $\mathrm{var} = \frac{1}{N(N-1)}\left(N\sum_{i=1}^{N}x_i^2-\left(\sum_{i=1}^{N}x_i\right)^2\right)$, here $x = \sqrt{a_x^2+a_y^2+a_z^2}$ |
| 3 | First eigenvalue of moving covariance of difference between acceleration and gyroscope data | $E_{ag} = \mathrm{eig}_1\left(\mathrm{cov}\left(a_x-g_x,\,a_y-g_y,\,a_z-g_z\right)\right)$ |
| 4 | Moving energy of gyroscope data along z-axis | $ME = \frac{1}{N}\sum_{i=1}^{N}x_i^2$, here $x = g_z$ |
| 5 | Moving energy of difference between acceleration and gyroscope data along z-axis | $ME_{ag} = \frac{1}{N}\sum_{i=1}^{N}(x_i-y_i)^2$, here $x = a_z$, $y = g_z$ |
| 6 | Moving variance of 100 samples of acceleration data along x-axis | $\mathrm{var} = \frac{1}{N(N-1)}\left(N\sum_{i=1}^{N}x_i^2-\left(\sum_{i=1}^{N}x_i\right)^2\right)$, here $x = a_x$ |
| 7 | First eigenvalue of moving covariance between acceleration data | $E_a = \mathrm{eig}_1\left(\mathrm{cov}\left(a_x(1:N),\,a_y(1:N),\,a_z(1:N)\right)\right)$ |
| 8 | First eigenvalue of moving covariance between gyroscope data | $E_g = \mathrm{eig}_1\left(\mathrm{cov}\left(g_x(1:N),\,g_y(1:N),\,g_z(1:N)\right)\right)$ |
| 9 | Moving energy of orientation vector of acceleration data | $MEA = \frac{1}{N}\sum_{i=1}^{N}\varphi_i^2$, here $\varphi = \arccos\frac{a_x \cdot a_y}{|a_x|\,|a_y|}$ |
| 10 | Movement intensity of gyroscope data | $MI_g = \sqrt{g_x^2+g_y^2+g_z^2}$ |
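For illustration, the window-based features of Table 2 can be computed directly from the raw channels; a sketch for the moving variance (rows 1, 2 and 6), the acceleration intensity input of row 2, and the movement intensity of row 10, with the window length N = 100 as in the table:

```python
import numpy as np

def moving_variance(x, N=100):
    # Rows 1, 2 and 6 of Table 2: sliding-window variance over N samples,
    # var = (N * sum(x^2) - (sum(x))^2) / (N * (N - 1)).
    out = np.empty(len(x) - N + 1)
    for i in range(len(out)):
        w = x[i:i + N]
        out[i] = (N * np.sum(w**2) - np.sum(w)**2) / (N * (N - 1))
    return out

def acceleration_intensity(ax, ay, az):
    # Input signal for row 2: magnitude of the acceleration vector.
    return np.sqrt(ax**2 + ay**2 + az**2)

def movement_intensity_gyro(gx, gy, gz):
    # Row 10 of Table 2: MI_g = sqrt(gx^2 + gy^2 + gz^2).
    return np.sqrt(gx**2 + gy**2 + gz**2)
```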
3.3. Evaluation Metrics
To evaluate the effectiveness of the proposed method for user identity verification, we use
four metrics widely used in the information security community:
False Accept Rate (FAR) is the probability (or a portion of recognition attempts) that the identity
verification system incorrectly identifies the hijacker (impostor) as the genuine user. For a user,
the FAR is a measure of system security.
False Reject Rate (FRR) is the probability (or a portion of recognition attempts) that the identity
verification system incorrectly rejects the genuine user. For a user, the FRR measures the user
inconvenience level.
Equal Error Rate (EER) is the rate at which both FAR and FRR are equal. The lower the value of the EER, the higher the accuracy of the biometric system.
Accuracy (or a true positive rate, TPR) is a proportion of all recognition attempts where subjects
were identified correctly.
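These rates can be estimated empirically from the recorded distance scores; a minimal sketch, assuming a lower distance means a better match, which approximates the EER at the threshold where FAR and FRR cross:

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    # genuine: distances for attempts by the true user; impostor: by others.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])  # accepted impostors
    frr = np.array([(genuine > t).mean() for t in thresholds])    # rejected genuines
    i = np.argmin(np.abs(far - frr))          # point where the two rates cross
    return far[i], frr[i], 0.5 * (far[i] + frr[i])
```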
3.4. Results
We evaluate the effectiveness of the proposed method for user identity verification based on
user gait parameters using the FRR, FAR and Accuracy metrics. For subject identification, the data
from all physical actions is used to train the classifier. Here we consider 1-vs.-all subject classification.
Therefore, the data of one subject are defined as a positive class, and the data of all other subjects are
defined as a negative class. Five-fold cross-validation was performed using 80% of data for training
and 20% of data for testing. The results of 1-vs.-all subject identification using all activities for training
and testing are presented in Figures 6 and 7. The grand mean FAR for all users is 0.0869, and the FRR is 0.0763, with an accuracy of 0.9171.
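The evaluation protocol can be sketched as follows, assuming a feature matrix X and per-frame subject labels; scikit-learn's StratifiedKFold is an assumption about tooling, not the code used in the paper:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold  # assumed tooling

def one_vs_all_folds(X, subject_ids, target_subject, n_splits=5, seed=0):
    # Positive class: frames of the target subject; negative: everyone else.
    y = (subject_ids == target_subject).astype(int)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        # Each fold trains on 80% of the data and tests on the remaining 20%.
        yield X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```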
Symmetry2016,8,100 13of20
andtestingarepresentedinFigures6and7.GrandmeanFARforallusersis0.0869,andFRRis
0.0763,withaccuracyof0.9171.
(a)
(b)
Figure6.(a)Meanand(b)standarddeviationofFRR(crosssubjectidentification).
(a)
(b)
Figure7.(a)Meanand(b)standarddeviationofFAR(crosssubjectidentification).
Ifanactivityofasubjecthasbeenestablished,separateclassifiersforeachactivitycanbeused
forsubjectidentification.Then,datafromoneactiononlyareusedfortrainingaswellasfortesting.
Inthiscase,fivefoldcrossvalidationwasalsoperformed,using80%ofdatafortraining,and20%of
datafortesting,andtheresultsarepresentedinFigure8.Thegrandmeanaccuracyis0.7202,which
isnotaveryhighresult(worstresultsprovidedbyElevatorUpandElevatorDownactions).
However,ifweconsideronlytopthreewalkingrelatedactivities(WalkingForward,WalkingLeft
orWalkingRight),themeanaccuracyis0.9444.
Figure8.Meanaccuracyof1vs.allsubjectidentificationforspecificactivities(1,WF;2,WL;3,WR;
4,WU;5,WD;6,FR;7,JU;8,Si;9,St;10,Sl;11,EU;and12,ED).
Figure 6. (a) Mean and (b) standard deviation of FRR (cross-subject identification).
Symmetry2016,8,100 13of20
andtestingarepresentedinFigures6and7.GrandmeanFARforallusersis0.0869,andFRRis
0.0763,withaccuracyof0.9171.
(a)
(b)
Figure6.(a)Meanand(b)standarddeviationofFRR(crosssubjectidentification).
(a)
(b)
Figure7.(a)Meanand(b)standarddeviationofFAR(crosssubjectidentification).
Ifanactivityofasubjecthasbeenestablished,separateclassifiersforeachactivitycanbeused
forsubjectidentification.Then,datafromoneactiononlyareusedfortrainingaswellasfortesting.
Inthiscase,fivefoldcrossvalidationwasalsoperformed,using80%ofdatafortraining,and20%of
datafortesting,andtheresultsarepresentedinFigure8.Thegrandmeanaccuracyis0.7202,which
isnotaveryhighresult(worstresultsprovidedbyElevatorUpandElevatorDownactions).
However,ifweconsideronlytopthreewalkingrelatedactivities(WalkingForward,WalkingLeft
orWalkingRight),themeanaccuracyis0.9444.
Figure8.Meanaccuracyof1vs.allsubjectidentificationforspecificactivities(1,WF;2,WL;3,WR;
4,WU;5,WD;6,FR;7,JU;8,Si;9,St;10,Sl;11,EU;and12,ED).
Figure 7. (a) Mean and (b) standard deviation of FAR (cross-subject identification).
If the activity of a subject has been established, separate classifiers for each activity can be used
for subject identification. Then, data from one activity only are used for both training and testing.
In this case, five-fold cross-validation was also performed, using 80% of the data for training and 20% of
the data for testing, and the results are presented in Figure 8. The grand mean accuracy is 0.7202, which is
not a very high result (the worst results were obtained for the Elevator Up and Elevator Down actions). However,
if we consider only the top three walking-related activities (Walking Forward, Walking Left or Walking
Right), the mean accuracy is 0.9444; a sketch of this per-activity setup is given after Figure 8.
Figure 8. Mean accuracy of 1-vs.-all subject identification for specific activities (1, WF; 2, WL; 3, WR; 4, WU; 5, WD; 6, FR; 7, JU; 8, Si; 9, St; 10, Sl; 11, EU; and 12, ED).
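A minimal sketch of the per-activity setup described above; the activity array (one activity label per segment) and the classifier choice are assumed inputs, not the paper's implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def per_activity_accuracy(X, y, activity, subject, act_label):
    # Keep only the segments of one activity, then run five-fold
    # cross-validated 1-vs.-all classification on that subset.
    mask = (activity == act_label)
    target = (y[mask] == subject).astype(int)
    scores = cross_val_score(KNeighborsClassifier(), X[mask], target, cv=5)
    return float(scores.mean())
```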
Finally, we can simplify the classification problem to binary classification, i.e., recognizing
one subject against another single subject. Then, the data from a pair of subjects performing a specific
activity are used for training and classification. Separate classifiers are built for each pair of subjects,
the results are evaluated using five-fold cross-validation, and the results are averaged; a sketch of this
pairwise setup is given after Figure 9. The results are presented in Figure 9. Note that the grand mean
accuracy has increased to 0.9475, while for the top three walking-related activities (Walking Forward,
Walking Left or Walking Right) the grand mean accuracy is 0.9916.
Figure 9. Mean accuracy of subject-vs.-subject identification for specific activities (1, WF; 2, WL; 3, WR; 4, WU; 5, WD; 6, FR; 7, JU; 8, Si; 9, St; 10, Sl; 11, EU; and 12, ED).
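The pairwise (subject-vs.-subject) setting can be sketched in the same illustrative style; again, the variable names and the averaging loop are assumptions rather than the authors' implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def pairwise_accuracy(X, y, activity, act_label):
    # One binary classifier per pair of subjects; accuracies averaged
    # over all pairs and over five cross-validation folds.
    accs = []
    for a, b in combinations(np.unique(y), 2):
        mask = (activity == act_label) & np.isin(y, [a, b])
        scores = cross_val_score(KNeighborsClassifier(),
                                 X[mask], y[mask], cv=5)
        accs.append(scores.mean())
    return float(np.mean(accs))
```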
In Figures 10–12, we present the values of the FAR, FRR and accuracy metrics of subject-vs.-subject
identification using data from all types of physical activities. The grand mean results are: FAR of
0.056 ± 0.027, FRR of 0.053 ± 0.028, and accuracy of 0.945 ± 0.028 (mean ± standard deviation
values given).
Figure 10. FAR of subject-vs.-subject identification using all activities.
Figure 11. FRR of subject-vs.-subject identification using all activities.
Figure 12. Accuracy of subject-vs.-subject identification using all activities.
Finally, we have evaluated the EER for each subject as the intersection point of the probability density
functions (PDFs) of the FAR and FRR. We have modeled the PDFs of the FAR and FRR using the Weibull
distribution, which is often used in reliability analysis; with an appropriate choice of parameters,
a Weibull distribution can take on the characteristics of many other types of distributions, including
the Gaussian. The results are presented in Figure 13, and a sketch of this EER estimate is given after
the figure. The grand mean EER for all subjects is 0.057 ± 0.030. In addition, note the substantial
variability between subjects (e.g., between S1 and S13): some subjects have more distinctive gait
characteristics than others.
Figure 13. EER of 1-vs.-all subject identification for different subjects (S1–S14).
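A sketch of this EER estimate, assuming per-trial FAR and FRR samples are available; the SciPy routines used (weibull_min, brentq) are real, but the procedure is our reading of the description above and assumes the two fitted PDFs cross between their sample means.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import brentq

def weibull_eer(far_samples, frr_samples):
    # Fit a Weibull distribution to each set of error-rate samples
    # (location fixed at zero, since error rates are non-negative).
    far_pdf = weibull_min(*weibull_min.fit(far_samples, floc=0)).pdf
    frr_pdf = weibull_min(*weibull_min.fit(frr_samples, floc=0)).pdf
    # Locate the crossing of the two PDFs as a root of their difference,
    # searched between the two sample means (assumed to bracket it).
    lo, hi = sorted([np.mean(far_samples), np.mean(frr_samples)])
    return brentq(lambda t: far_pdf(t) - frr_pdf(t), lo, hi)
```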
4. Evaluation
Human gait is a complex spatiotemporal biometric characteristic. Gait cannot be considered
a fully distinctive biometric, since it may change over time because of age-related health
problems, injuries, changes in clothing (shoes, etc.), the terrain walked on, or changes in behavior or mood.
However, it may be used for certain low-security applications on the smartphone, or together with
other biometrics (e.g., speech recognition) as an additional layer of security.
To be practical in everyday scenarios, a gait-based user identity verification system must be able to
cope with the changing context of its use (different placements and orientations of the phone). The position
of the phone relative to the user's body is a key parameter for activity recognition, and activity
recognition in turn is key to successful user identity verification. Gait-based user identity verification
methods must be computationally simple enough to run on the phone itself, rather than on a remote
cloud, and must ensure continuous verification of the user's gait parameters.
The challenges these methods face include: noisy data due to contamination or interference;
high intra-class variability of the user's gait parameters; limited discriminative capability to
distinguish the valid user from other users (some users are more easily recognizable than others);
collecting a consistent and reliable dataset for testing the algorithm, as some activities may have been
labeled incorrectly by users; dealing with transitionary and overlapping activities that have fuzzy borders
(e.g., between fast walking and running); dealing with incorrect or unanticipated placement of the phone
relative to the activity being measured; and the ability to withstand spoof attacks in which an impostor
tries to mimic the gait of the valid user.
Finally, the evaluation of the proposed method using the criteria formulated by Jain et al. [4] and
Wayman et al. [2] is presented in Table 3.
Table 3. Evaluation using biometric system criteria.

According to Jain et al. [4]:
Distinctiveness: gait parameters are unique and difficult to mimic.
Permanence: human gait characteristics are sufficiently stable over a short (day-to-day) period of time.
Universality: all people able to walk have fairly distinctive gait characteristics.
Collectability: the gait characteristics can be measured quantitatively using sensors commonly available on most smartphones.
Performance: the advantages of Random Projections are simplicity, scalability, robustness to noise and low computational complexity: constructing the random matrix R and projecting the d × N data matrix into k dimensions is of order O(dkN). User matching is performed using the computationally inexpensive Jaccard distance (see the sketch after this table).
Acceptability: continuous gait monitoring is not as intrusive as other methods of authentication (e.g., face recognition).
Circumvention: gait is difficult to mimic by an impostor.

According to Wayman et al. [2]:
Cooperative/non-cooperative: does not apply; the method does not require explicit cooperation of the user.
Overt/covert: the method is covert, as the user may not be aware that one of his biometrics is being measured.
Habituation: habituated; the gait characteristic should be measured continuously, both to ensure security monitoring and to keep the user's gait model up-to-date.
Attended/non-attended: non-attended; the method does not require direct supervision of the user.
Standard/nonstandard: nonstandard; collecting enough data for user verification may require a longer period of walking, which may be ensured only in a nonstandard (outdoor) environment.
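As a sketch of the Performance criterion (and of the matching pipeline summarized in the Conclusions), the code below projects d-dimensional features to k = 2 dimensions with an Achlioptas-style sparse random matrix [49], estimates a two-dimensional histogram as a PDF proxy, and compares two users with the weighted Jaccard distance. The bin count, histogram range and histogram-based PDF estimate are our assumptions, not the paper's exact choices.

```python
import numpy as np

def random_projection(X, k=2, seed=0):
    # Achlioptas-style sparse matrix: entries +1, 0, -1 with
    # probabilities 1/6, 2/3, 1/6, scaled by sqrt(3/k).
    rng = np.random.default_rng(seed)
    R = rng.choice([1.0, 0.0, -1.0], size=(X.shape[1], k),
                   p=[1 / 6, 2 / 3, 1 / 6])
    return X @ (np.sqrt(3.0 / k) * R)   # O(dkN) for N rows of X

def pdf_2d(P, bins=32, box=((-5.0, 5.0), (-5.0, 5.0))):
    # Histogram-based estimate of the 2-D PDF of projected features.
    h, _, _ = np.histogram2d(P[:, 0], P[:, 1], bins=bins, range=box)
    return h / h.sum()

def jaccard_distance(p, q):
    # Weighted Jaccard distance between two discretised PDFs:
    # 1 - sum(min(p, q)) / sum(max(p, q)); 0 for identical PDFs.
    return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()
```

The decision rule then reduces to thresholding jaccard_distance between the PDF of a new recording and the stored template of the claimed user.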
5. Conclusions
We have presented a method for gait-based user identity verification using smartphone sensor
data. Gait-based user identity verification relies on the biometric specificity of human activity traits.
By continuously, implicitly and unobtrusively identifying the phone's owner using accelerometer and
gyroscope sensing, gait analysis has great potential to improve user identity verification on the go.
The proposed method is based on selected statistical and heuristic gait features and the application of
the Random Projections method for the reduction of feature dimensionality. An estimate of the probability
density function (PDF) of the low-dimensional feature vector is used to match the user in question
against the PDF of the valid user.
The proposed method was tested with off-line data from the USC-HAD dataset. The results
for subject recognition are at an acceptable level: the achieved grand mean Equal Error Rate (EER)
for all subjects is 5.7% (std = 3.0%). As gait-based verification technologies are being considered for
deployment as an additional (optional) security layer in smartphones, our findings represent a step
towards improving the performance and usability of these systems.
Future work will include the implementation of the gait-based user identity verification system
on the mobile (Android) platform and performing real-time experiments with subjects.
Acknowledgments: The authors would like to acknowledge the contribution of the COST Action IC1303 AAPELE—Architectures, Algorithms and Platforms for Enhanced Living Environments.
Author Contributions: R.D. conceived, designed and performed the experiments; R.M. and A.V. analyzed the data; A.V. contributed materials; R.D. wrote the paper; M.W. revised the mathematical description of the method; and R.M., M.W. and A.V. provided suggestions to improve the paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Phone Theft in America. Available online: https://transition.fcc.gov/cgb/events/Lookout-phone-theft-in-america.pdf/ (accessed on 17 June 2016).
2. Clarke, N.L.; Furnell, S.M. Authentication of users on mobile telephones—A survey of attitudes and practices. Comput. Secur. 2005, 24, 519–527. [CrossRef]
3. Breitinger, F.; Nickel, C. User survey on phone security and usage. In Proceedings of the Special Interest Group on Biometrics and Electronic Signatures, Darmstadt, Germany, 9–10 September 2010; pp. 139–144.
4. Wayman, J.; Jain, A.; Maltoni, D.; Maio, D. An introduction to biometric authentication systems. In Biometric Systems; Springer: London, UK, 2005; pp. 1–20.
5. Sprager, S.; Juric, M.B. Inertial sensor-based gait recognition: A review. Sensors 2015, 15, 22089–22127. [CrossRef] [PubMed]
6. Jain, A.K.; Flynn, P.J.; Ross, A.A. (Eds.) Handbook of Biometrics; Springer: Berlin/Heidelberg, Germany, 2008.
7. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Cell phone-based biometric identification. In Proceedings of the Fourth IEEE International Conference on Biometrics: Theory Applications and Systems (BTAS), Washington, DC, USA, 27–29 September 2010; pp. 1–7.
8. Nickel, C.; Zhou, X.; Busch, C. Template Protection for Biometric Gait Data. In Proceedings of the Special Interest Group on Biometrics and Electronic Signatures, Darmstadt, Germany, 9–10 September 2010; pp. 73–83.
9. Wang, L.; Tan, T.; Hu, W.; Ning, H. Automatic gait recognition based on statistical shape analysis. IEEE Trans. Image Process. 2003, 12, 1120–1131. [CrossRef] [PubMed]
10. Ailisto, H.; Lindholm, M.; Mantyjarvi, J.; Vildjounaite, E.; Makela, S.M. Identifying people from gait pattern with accelerometers. In Proceedings of the Biometric Technology for Human Identification II, Orlando, FL, USA, 28 March 2005; pp. 7–14.
11. Thang, H.M.; Viet, V.Q.; Thuc, N.D.; Choi, D. Gait identification using accelerometer on mobile phone. In Proceedings of the International Conference on Control, Automation and Information Sciences (ICCAIS), Ho Chi Minh, Vietnam, 26–29 November 2012; pp. 344–348.
12. Liu, R.; Zhou, J.Z.; Liu, M.; Hou, X.F. A wearable acceleration sensor system for gait recognition. In Proceedings of the 2nd IEEE Conference on Industrial Electronics and Applications (ICIEA), Harbin, China, 23–25 May 2007; pp. 2654–2659.
13. Pan, G.; Zhang, Y.; Wu, Z. Accelerometer-based gait recognition via voting by signature points. Electron. Lett. 2009, 45, 1116–1118. [CrossRef]
14. Sprager, S. A cumulant-based method for gait identification using accelerometer data with principal component analysis and support vector machine. In Proceedings of the 2nd WSEAS International Conference on Sensors, Signals, Visualization, Imaging, Simulation and Materials, Baltimore, MD, USA, 7–9 November 2009; pp. 94–99.
15. Bachlin, M.; Schumm, J.; Roggen, D.; Toster, G. Quantifying gait similarity: User authentication and real-world challenge. In Advances in Biometrics; Tistarelli, M., Nixon, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5558, pp. 1040–1049.
16. Trivino, G.; Alvarez-Alvarez, A.; Bailador, G. Application of the computational theory of perceptions to human gait pattern recognition. Pattern Recogn. 2010, 43, 2572–2581. [CrossRef]
17. Frank, J.; Mannor, S.; Precup, D. Activity and gait recognition with time-delay embeddings. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010; pp. 1581–1586.
18. Nickel, C.; Brandt, H.; Busch, C. Classification of acceleration data for biometric gait recognition on mobile devices. In Proceedings of the Special Interest Group on Biometrics and Electronic Signatures, Darmstadt, Germany, 8–9 September 2011; Volume 191, pp. 57–66.
19. Kobayashi, T.; Hasida, K.; Otsu, N. Rotation invariant feature extraction from 3-D acceleration signals. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 3684–3687.
20. Juefei-Xu, F.; Bhagavatula, C.; Jaech, A.; Prasad, U.; Savvides, M. Gait-ID on the move: Pace independent human identification using cell phone accelerometer dynamics. In Proceedings of the IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 8–15.
21. Hoang, T.; Nguyen, T.D.; Luong, C.; Do, S.; Choi, D. Adaptive cross-device gait recognition using a mobile accelerometer. J. Inf. Proc. Syst. 2013, 9, 333. [CrossRef]
22. Derawi, M.; Bours, P. Gait and activity recognition using commercial phones. Comput. Secur. 2013, 39, 137–144. [CrossRef]
23. Wolff, M. Behavioral biometric identification on mobile devices. In Foundations of Augmented Cognition; Springer: Berlin/Heidelberg, Germany, 2013; pp. 783–791.
24. Lu, H.; Huang, J.; Saha, T.; Nachman, L. Unobtrusive gait verification for mobile phones. In Proceedings of the 2014 ACM International Symposium on Wearable Computers, Seattle, WA, USA, 13–17 September 2014; pp. 91–98.
25. Lin, B.-S.; Liu, Y.-T.; Yu, C.; Jan, G.E.; Hsiao, B.-T. Gait recognition and walking exercise intensity estimation. Int. J. Environ. Res. Public Health 2014, 11, 3822–3844. [CrossRef] [PubMed]
26. Johnston, A.H.; Weiss, G.M. Smartwatch-based biometric gait recognition. In Proceedings of the IEEE 7th International Conference on Biometrics Theory, Applications and Systems, Arlington, VA, USA, 8–11 September 2015; pp. 1–6.
27. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical human activity recognition using wearable sensors. Sensors 2015, 15, 31314–31338. [CrossRef] [PubMed]
28. Abidine, M.B.; Fergani, B. News schemes for activity recognition systems using PCA-WSVM, ICA-WSVM, and LDA-WSVM. Information 2015, 6, 505–521. [CrossRef]
29. Pei, L.; Guinness, R.; Chen, R.; Liu, J.; Kuusniemi, H.; Chen, Y.; Chen, L.; Kaistinen, J. Human behavior cognition using smartphone sensors. Sensors 2013, 13, 1402–1424. [CrossRef] [PubMed]
30. Huynh, D.T.G. Human Activity Recognition with Wearable Sensors. Ph.D. Thesis, Technische Universität Darmstadt, Darmstadt, Germany, August 2008.
31. Bobick, A. Movement, activity, and action: The role of knowledge in the perception of motion. Philos. Trans. R. Soc. Lond. B 1997, 352, 1257–1265. [CrossRef] [PubMed]
32. Govindaraju, V. A generative framework to investigate the underlying patterns in human activities. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Barcelona, Spain, 6–13 November 2011; pp. 1472–1479.
33. Suriani, N.S.; Hussain, A.; Zulkifley, M.A. Sudden event recognition: A survey. Sensors 2013, 13, 9966–9998. [CrossRef] [PubMed]
34. Jain, A.K.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4–20. [CrossRef]
35. Yampolskiy, R.V.; Govindaraju, V. Behavioural biometrics: A survey and classification. Int. J. Biom. 2008, 1, 81–113. [CrossRef]
36. Drosou, A.; Ioannidis, D.; Moustakas, K.; Tzovaras, D. Spatiotemporal analysis of human activities for biometric authentication. Comput. Vis. Image Underst. 2012, 116, 411–421. [CrossRef]
37. Lustrek, M.; Cvetkovic, B.; Kozina, S. Energy expenditure estimation with wearable accelerometers. In Proceedings of the 2012 IEEE International Symposium on Circuits and Systems (ISCAS), Seoul, Korea, 20–23 May 2012; pp. 5–8.
38. Ainsworth, B.E.; Haskell, W.L.; Herrmann, S.D.; Meckes, N.; Bassett, D.R., Jr.; Tudor-Locke, C.; Greer, J.L.; Vezina, J.; Whitt-Glover, M.C.; Leon, A.S. Compendium of physical activities: A second update of codes and MET values. Med. Sci. Sports Exerc. 2011, 43, 1575–1581. [CrossRef] [PubMed]
39. Zhu, C.; Sheng, W. Motion- and location-based online human daily activity recognition. Pervasive Mob. Comput. 2011, 7, 256–269. [CrossRef]
40. Incel, O.D.; Kose, M.; Ersoy, C. A review and taxonomy of activity recognition on mobile phones. BioNanoScience 2013, 3, 145–171. [CrossRef]
41. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [CrossRef]
42. Fleury, A.; Noury, N.; Vacher, M. Improving supervised classification of activities of daily living using prior knowledge. Int. J. E-Health Med. Commun. 2011, 2, 17–34. [CrossRef]
43. Capela, N.A.; Lemaire, E.D.; Baddour, N. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients. PLoS ONE 2015, 10, e0124414. [CrossRef] [PubMed]
44. Atallah, L.; Lo, B.; King, R.; Yang, G.-Z. Sensor positioning for activity recognition using wearable accelerometers. IEEE Trans. Biomed. Circ. Syst. 2011, 5, 320–329. [CrossRef] [PubMed]
45. Könönen, V.; Mäntyjärvi, J.; Similä, H.; Pärkkä, J.; Ermes, M. Automatic feature selection for context recognition in mobile devices. Pervasive Mob. Comput. 2010, 6, 181–197.
46. Choujaa, D.; Dulay, N. TracMe: Temporal activity recognition using mobile phone data. In Proceedings of the IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC'08), Shanghai, China, 17–20 December 2008; pp. 119–126.
47. Kang, K.C. FODA: Twenty years of perspective on feature modeling. In Proceedings of the 4th International Workshop on Variability Modelling of Software-Intensive Systems, Linz, Austria, 27–29 January 2010; pp. 1–9.
48. Damaševičius, R.; Štuikys, V.; Toldinas, J. Domain ontology-based generative component design using feature diagrams and meta-programming technique. In Proceedings of the 2nd European Conference on Software Architecture (ECSA), Paphos, Cyprus, 29 September–1 October 2008; pp. 338–341.
49. Achlioptas, D. Database-friendly random projections. In Proceedings of the ACM Symposium on the Principles of Database Systems, Santa Barbara, CA, USA, 31 May–3 June 2001; pp. 274–281.
50. Johnson, W.B.; Lindenstrauss, J. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math. 1984, 26, 189–206.
51. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065. [CrossRef]
52. Zhang, M.; Sawchuk, A.A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, New York, NY, USA, 1–4 December 2012; pp. 1036–1043.
53. Mathie, M.; Celler, B.G.; Lovell, N.H.; Coster, A. Classification of basic daily movements using a triaxial accelerometer. Med. Biol. Eng. Comput. 2004, 42, 679–687. [CrossRef] [PubMed]
54. Damaševičius, R.; Vasiljevas, M.; Šalkevičius, J.; Woźniak, M. Human activity recognition in AAL environments using random projections. Comput. Math. Methods Med. 2016, 2016. [CrossRef] [PubMed]
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).