December 20, 2018 12:25 ws-rv9x6 Book Title BookChapterUAM˙v6
page 1
Chapter 1
Modeling the Complexity of Signature and Touch-Screen Biometrics
using the Lognormality Principle
Ruben Vera-Rodriguez, Ruben Tolosana, Javier Hernandez-Ortega, Alejandro
Acien, Aythami Morales, Julian Fierrez and Javier Ortega-Garcia
BiDA Lab Biometrics and Data Pattern Analytics Laboratory, Universidad
Autonoma de Madrid, Madrid, Spain
(ruben.vera, ruben.tolosana, javier.hernandezo, alejandro.acien,
aythami.morales, julian.fierrez, javier.ortega)@uam.es
This paper focuses on modeling the complexity of biomechanical tasks through
the usage of the Sigma LogNormal model of the Kinematic Theory of rapid hu-
man movements. The Sigma LogNormal model has been used for several ap-
plications, in particular related to modeling and generating synthetic handwritten
signatures in order to improve the performance of automatic verification systems.
In this paper we report experimental work for the usage of the Sigma LogNormal
model to predict the complexity of biomechanical tasks on two case studies: 1)
on-line signature recognition in order to generate user-based complexity groups
and develop specific verification systems for each of them, and 2) detection of age
groups (children from adults) using touch screen patterns. The results achieved
show the benefits of using the Sigma LogNormal model for modeling the com-
plexity of biomechanical tasks in the two case studies considered.
1. Introduction
On-line signature verification and other handwritten tasks (drawings, touch pat-
terns, etc.) are experiencing a high development recently due to the technological
evolution of digitizing devices, including smartphones and tablets. Such handwrit-
ten data can be applied to many applications in different sectors such as security,
e-government, healthcare, education, user profiling, advertising or banking.1–4
This paper focuses on modeling the complexity of handwritten information,
which can be a very important factor in different applications related to hand-
writing. We propose to model the complexity of handwritten tasks through the
usage of the Sigma LogNormal model of the Kinematic Theory of rapid human
movements.5 The Sigma LogNormal model has been used in the past for several
This is a pre-print of an article to be published in the book:
The Lognormality Principle and its Applications, R. Plamondon et al. (Eds.),
World Scientific, 2019.
applications. One of the most successful has been the synthetic generation
of handwriting, in particular signatures (two examples in Refs. 6 and 7). This model has
recently been used in Refs. 8 and 9 not to generate synthetic signature samples, but to
improve the performance of traditional signature verification systems. In Ref. 8 the
authors proposed a skilled forgery detector using features extracted from the
Sigma LogNormal model, whereas in Ref. 9 a new set of features based on the Sigma
LogNormal model was proposed, achieving very good performance.
In this paper we report experimental work for the usage of the Sigma LogNor-
mal model to predict the complexity of biomechanical tasks on two case studies:
1) The first one describes its application to on-line signatures in order to generate
user-based complexity groups (as there are users with very complex signatures
and others with very simple ones). A specific signature verification system
is then developed for each complexity group, achieving very significant improvements
in verification performance.10 2) The second one describes
its application to the detection of age groups (children versus adults) in touch dynamic tasks
performed on smartphones or tablets,11 as the difference between adults and children
is mainly caused by the different maturity of their anatomy and neuromotor
system. These are less mature in children, who therefore have worse manual dexterity,
resulting in rougher movements.5,12
The remainder of the paper is organized as follows. Sec. 2 describes the Sigma
LogNormal model, used in this work to model the complexity of handwritten
tasks. Sec. 3 describes the first case study, focused on modeling the complexity
of on-line signatures, and its experimental results. Sec. 4 describes the second
case study, focused on modeling the complexity of touch dynamic information in
order to detect age groups, and its experimental results. Finally, Sec. 5 draws the
final conclusions and points out some lines for future work.
2. The Sigma LogNormal Model
Many models have been proposed to analyze human movement patterns in general
and handwriting in particular. These models allow the analysis of features related
to motor control processes and the neuromuscular response, providing complementary
features to the traditional X and Y coordinates of handwriting
tasks. One of the best known handwriting generation models is the Sigma Log-
Normal model.5,13
The Sigma LogNormal model decomposes the complex signals that describe
the speed of muscular movements into simpler ones that can be explained by a
few parameters. These parameters contain information about the activity itself
and about the neuromotor skills of the person.14 In particular, the Sigma Log-
Fig. 1. Trace and velocity profile of one reconstructed on-line signature using the Sigma LogNormal
model. A single stroke of the signature and its corresponding lognormal profile are highlighted in red
colour. Individual strokes are segmented within the LogNormal algorithm.5
Normal model states that the velocity profile of human hand movements can be
decomposed into strokes. Moreover, the velocity of each of these strokes, i, can
be described by a speed signal v_i(t) that has a lognormal shape:

|v_i(t)| = ( D_i / (σ_i √(2π) (t − t_{0i})) ) · exp( −(ln(t − t_{0i}) − μ_i)² / (2σ_i²) )    (1)
where each of the parameters is described in Table 1. The complete velocity
profile is modelled as the sum of the different individual stroke velocity profiles:

v_r(t) = Σ_{i=1}^{N} v_i(t)    (2)
where N is the number of lognormals of the entire movement. A complex
action, like a handwritten signature or a touch task, is a summation of these lognormals,
each one characterized by different values for the six parameters in Table 1.
Fig. 1 shows an example of the lognormal velocity profiles extracted for each
stroke of one signature.
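As an illustration, Eqs. (1) and (2) can be sketched in a few lines of code. The parameter values below are arbitrary, chosen only to produce a plausible two-stroke profile; they are not taken from the chapter's data:

```python
import numpy as np

def lognormal_speed(t, D, t0, mu, sigma):
    """Speed magnitude |v_i(t)| of a single stroke, Eq. (1)."""
    v = np.zeros_like(t)
    m = t > t0                      # the lognormal is defined for t > t0
    dt = t[m] - t0
    v[m] = (D / (sigma * np.sqrt(2 * np.pi) * dt)
            * np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2)))
    return v

def velocity_profile(t, strokes):
    """Complete profile v_r(t) as the sum of stroke profiles, Eq. (2)."""
    return sum(lognormal_speed(t, **s) for s in strokes)

# Two hypothetical strokes; D is the distance each stroke would
# cover if executed in isolation.
t = np.linspace(0.0, 2.0, 2000)
strokes = [dict(D=5.0, t0=0.05, mu=-1.6, sigma=0.25),
           dict(D=3.0, t0=0.35, mu=-1.4, sigma=0.30)]
v = velocity_profile(t, strokes)
```

Since each lognormal integrates to its input pulse D_i, numerically integrating v over a long enough window recovers approximately D_1 + D_2, which is a quick sanity check for any implementation.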
3. Case Study 1: On-Line Signature Complexity
Signature verification systems have been shown to be highly sensitive to signature
complexity.15 In Ref. 16, Alonso-Fernandez et al. evaluated the effect of the complexity
Table 1. Sigma LogNormal parameters description.

Parameter | Description
D_i       | Input pulse: distance covered when the stroke is executed in isolation.
t_{0i}    | Initialization time: displacement along the time axis.
μ_i       | Log-temporal delay.
σ_i       | Impulse response time of the neuromotor system.
θ_{si}    | Initial angular position of the stroke.
θ_{ei}    | Final angular position of the stroke.
and legibility of signatures for off-line signature verification (i.e. signatures
with no available dynamic information), pointing out the differences in performance
for several matchers. Signature complexity has also been associated with the
concept of entropy, defining entropy as the inherent information content of biometric
samples.17,18 In Ref. 19, a "personal entropy" measure based on Hidden Markov
Models (HMM) was proposed in order to analyse the complexity and variability of
on-line signatures with respect to three different levels of entropy. In addition, the same
authors have recently proposed in Ref. 20 a new metric known as "relative entropy" for
classifying users into animal groups, where skilled forgeries are also considered.
Despite all the studies performed on on-line signature as a biometric trait,
none of them, as far as we are aware, have exploited the concept of complexity in
order to develop more robust and accurate on-line signature verification systems.
3.1. Proposed System
The architecture of our proposed system is shown in Fig. 2. Based on the parameters
of the Sigma LogNormal model, we propose to use the number of lognormals
(N) that models each signature as a measure of the complexity level of the signature.
Once this parameter is extracted for all available genuine signatures of the
enrolment phase, the user is classified into a complexity level (low, medium or high)
using a majority voting algorithm. Only genuine signatures
are considered in our proposed approach for measuring the complexity level.
The advantage of this approach is that the signature complexity detector can be
run off-line, thereby avoiding time-consuming delays and making it feasible
to apply in real-time scenarios.
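The user-level decision described above reduces to a majority vote over the per-signature complexity labels. A minimal sketch (the helper name and label strings are ours, not from the chapter):

```python
from collections import Counter

def user_complexity(per_signature_levels):
    """Assign a user to a complexity group by majority voting over
    the complexity labels of their enrolment signatures."""
    return Counter(per_signature_levels).most_common(1)[0][0]

# e.g. four enrolment signatures, three of them labelled "low"
level = user_complexity(["low", "medium", "low", "low"])
```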
Then, after having classified a given user into a complexity group, a specific
on-line signature verification module based on time functions (a.k.a. a local system)21
is adapted to each signature complexity level. For each acquired signature,
the signals related to the X and Y pen coordinates are used to extract a set of
23 time functions, similar to Ref. 22 (see Table 2). The most discriminative and robust
time functions of each complexity level are selected using the Sequential Forward
Fig. 2. Architecture of our proposed methodology focused on the development of an on-line signature
verification system adapted to the signature complexity level.
Feature Selection algorithm (SFFS), enhancing the signature verification system
in terms of EER.
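The floating selection procedure can be sketched as follows; here `score` stands in for the criterion actually optimized in the chapter (verification performance, i.e. lower EER, on the development set), and all helper names are ours:

```python
def sffs(features, score, k_max):
    """Sequential Floating Forward Selection (sketch).
    features: candidate feature indices; score(subset) -> criterion to
    maximize; k_max: largest subset size explored (k_max <= len(features))."""
    selected = []
    best = {0: ([], float("-inf"))}  # best subset found per size

    def record(subset):
        s = score(subset)
        if s > best.get(len(subset), ([], float("-inf")))[1]:
            best[len(subset)] = (subset, s)

    while len(selected) < k_max:
        # forward step: add the feature that most improves the criterion
        cand = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected = selected + [cand]
        record(selected)
        # floating backward step: drop features while that beats the
        # best subset previously found at the smaller size
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda f: score([g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            if score(reduced) > best.get(len(reduced), ([], float("-inf")))[1]:
                selected = reduced
                record(selected)
            else:
                break
    return max(best.values(), key=lambda sb: sb[1])[0]
```

The floating (conditional backward) step is what distinguishes SFFS from plain forward selection: a feature added early can later be discarded if a smaller subset turns out to score better.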
The local system considered in this work for computing the similarity between
the time functions of the input and training signatures is based on the DTW algorithm.23
Scores are obtained as:

score = e^{−D/K}    (3)

where D and K represent, respectively, the minimal accumulated distance and
the number of points aligned between two signatures using the DTW algorithm.
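A minimal sketch of this matcher, assuming a Euclidean local cost between time-function samples (the chapter does not fix the local cost here): the accumulated distance D is computed by dynamic programming, the warping-path length K by backtracking, and Eq. (3) turns the distance into a similarity score:

```python
import numpy as np

def dtw_score(a, b):
    """DTW similarity between two sequences of time-function vectors
    a, b of shape (T, F): score = exp(-D/K), Eq. (3)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack to count K, the number of aligned point pairs
    i, j, K = n, m, 1
    while i > 1 or j > 1:
        moves = [(D[i - 1, j - 1], i - 1, j - 1),
                 (D[i - 1, j], i - 1, j),
                 (D[i, j - 1], i, j - 1)]
        _, i, j = min(moves)
        K += 1
    return np.exp(-D[n, m] / K)
```

Identical sequences give D = 0 and hence a score of 1, while increasingly dissimilar sequences push the score towards 0; the division by K normalizes for sequence length.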
3.2. Database and Experimental Protocol
In this case, the BiosecurID database24 is considered. Signatures were acquired from
a total of 400 users using a Wacom Intuos 3 pen tablet with a resolution of 5080
dpi and 1024 pressure levels. The database comprises 16 genuine signatures and
12 skilled forgeries per user, captured in 4 separate acquisition sessions with a
two-month interval between them, in a controlled and supervised office-like scenario.
Signatures were acquired using a pen stylus.
The available information within each signature comprises the X and Y pen coordinates and
pressure. In addition, pen-up trajectories are available.
The experimental protocol has been designed to allow the study of the effect of different
signature complexity levels on the system performance. Two main experiments
are carried out: 1) evaluation of the signature complexity detector proposed in this
work in order to classify users into different complexity levels, and 2) evaluation
Table 2. Set of time functions considered in this work.

#     Feature
1     x-coordinate: x_n
2     y-coordinate: y_n
3     Pen pressure: z_n
4     Path-tangent angle: θ_n
5     Path velocity magnitude: v_n
6     Log curvature radius: ρ_n
7     Total acceleration magnitude: a_n
8-14  First-order derivatives of features 1-7: ẋ_n, ẏ_n, ż_n, θ̇_n, v̇_n, ρ̇_n, ȧ_n
15-16 Second-order derivatives of features 1-2: ẍ_n, ÿ_n
17    Ratio of the minimum over the maximum speed over a 5-sample window: v_n^r
18-19 Angle of consecutive samples and its first-order difference: α_n, α̇_n
20    Sine: s_n
21    Cosine: c_n
22    Stroke length-to-width ratio over a 5-sample window: r_n^5
23    Stroke length-to-width ratio over a 7-sample window: r_n^7
of the proposed approach based on a separate on-line signature verification system
adapted to each signature complexity level.
For the first experiment, our proposed signature complexity detector is analyzed
using all available users from BiosecurID. For the second experiment, the
BiosecurID database is split into a development dataset (40% of the users) and an
evaluation dataset (the remaining 60% of the users). The development dataset is used
to select the most discriminative and robust time functions for
each signature complexity level using the SFFS algorithm, whereas the evaluation
dataset is used for the evaluation of the proposed system. Both skilled and
random forgeries are considered, using the 4 signatures from the enrolment session
as reference signatures and the remaining 12 genuine signatures and 12 skilled
forgeries as test signatures. The final score is obtained by averaging the
scores of the four one-to-one comparisons.
3.3. Results
3.3.1. Analysis of the Signature Complexity Detector:
The first experiment was designed to evaluate the proposed approach for signature
complexity detection. For this, the signature complexity detector was developed
in two different steps. First, each user of the BiosecurID database was manually
labelled with a signature complexity level (low, medium, high). This process
Fig. 3. Probability density function of the number of lognormals for each complexity level using all
genuine signatures of the BiosecurID database. The three proposed complexity-dependent decision
thresholds are highlighted by black dashed lines.
was carried out by manually labelling the image of just one genuine signature
per user. Each of two annotators labelled the signatures twice in order to
ensure consistency of the results. Three different complexity levels were considered,
based on previous works.20 Users whose signatures were longer in writing time and
closer in appearance to handwriting were labelled as high-complexity
users, whereas users whose signatures were shorter in time and generally consisted of a
simple flourish with no legible information were labelled as low-complexity users.
This first stage served as ground truth. Following it, the Sigma Log-
Normal parameter N was extracted for each available genuine signature of the
BiosecurID database (i.e. a total of 400 × 16 = 6400 genuine signatures). Then,
we represented for each complexity level the corresponding distribution of the number of
lognormals according to the ground truth obtained during the first stage. Fig. 3
shows the distributions of the number of lognormals obtained for each complexity
level using all genuine signatures of the BiosecurID database. The three proposed
complexity-dependent decision thresholds are highlighted by black dashed lines
and were selected in order to minimize the number of misclassifications between
different signature complexity levels. Signatures with 17 or fewer lognormals
are classified as low-complexity signatures, whereas signatures
with more than 27 lognormals are classified into the high-complexity group.
Otherwise, signatures are categorized into the medium-complexity level. Fig. 4 shows
some of the signatures classified into each complexity level.
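The resulting decision rule is simple enough to state directly in code (function name ours; the thresholds are the ones selected above):

```python
def complexity_level(n_lognormals):
    """Map the number of lognormals N of a signature to a complexity
    level: <= 17 lognormals -> low, > 27 -> high, otherwise medium."""
    if n_lognormals <= 17:
        return "low"
    if n_lognormals > 27:
        return "high"
    return "medium"
```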
We now analyse each resulting complexity level following the same procedure
proposed in Ref. 20: analysing the system performance for different complexity
groups considering only the X and Y pen coordinates. It is important to remark that
each user is classified into a complexity level by applying the majority voting
algorithm to all available enrolment signatures of the user. Table 3 shows the system
Fig. 4. Signatures categorized for each complexity level using our proposed signature complexity
detector. From top to bottom: low, medium and high complexity.
performance for each complexity level in terms of EER (%). The results show different
system performance depending on the signature complexity level. Users with a
high complexity level show an absolute improvement of 4.3% EER compared to users
categorized into the low complexity level for skilled forgeries. Therefore, the idea
of considering a different optimal on-line signature verification system for each
signature complexity level is analysed in the next sections, in order to select the most
discriminative and robust time functions for each complexity group and improve
the system performance.
3.3.2. Time-Functions Selection for the Complexity-based Signature
Verification System:
First we analyse which are the most discriminative and robust time functions for
each signature complexity level using the SFFS algorithm over the development
dataset. The following three cases are studied:
(1) Time functions selected for all three signature complexity levels.
(2) Time functions selected only for medium and high signature complexity lev-
els.
(3) Time functions selected only for low and medium signature complexity levels.
For the first case, the time functions ż_n, ȧ_n and v_n^r (see Table 2) have been
Table 3. Experiment 1: System performance results (EER in %) on the BiosecurID database for each personal complexity level.

                    Low C.   Medium C.   High C.
Skilled forgeries    22.2      21.7       17.9
Random forgeries      3.6       2.4        2.6
selected in all systems as robust time functions regardless of the signature complexity
level. These time functions are the variation of pressure, the variation of
acceleration and the ratio of the minimum over the maximum speed, and they provide
general and valuable information to all signature verification systems about the skill
and speed with which users perform their signatures. For the second case, the time
functions v̇_n, ÿ_n and α̇_n have been selected for both the medium and high signature
complexity levels. These time functions provide information related to the variation
of the velocity, the vertical acceleration and the variation of the angle, i.e. time functions
more related to the geometry of the characters and therefore to handwriting. Finally,
the time function c_n is the only one selected for the third case; it provides
information related to the angles, as signatures with low and medium complexity
levels usually consist of simple flourishes with no legible information.
It is important to highlight that the time function ÿ_n is not selected for users
with a low signature complexity level. In other studies, such as Ref. 25, this time function
was selected in most optimal systems. However, the vertical acceleration seems
not to be very discriminative for users with a low signature complexity level, as their
signatures are usually simpler and not related to handwriting.
3.3.3. Experimental Results of the Complexity-based Signature Verifica-
tion System:
The second part of the experimental work was focused on developing a specific
verification system for each signature complexity group. For this, the SFFS
algorithm was applied to the development dataset in order to find the most
discriminative time functions for each complexity group. Then, the proposed
system was compared to a baseline system based on DTW that uses the same
time functions for all complexity groups, similar to the baseline
system presented in Ref. 8.
Table 4 shows the evaluation results achieved considering our proposed
approach based on personal entropy on-line signature verification systems.
Analysing the results obtained, our Proposed Systems achieve an average absolute
improvement of 2.5% EER compared to the Baseline System for the case of
skilled forgeries. It is important to note that for the most challenging users (users
Table 4. Experiment 2: System performance results (EER in %) on the evaluation dataset for each signature complexity level.

                         Low C.               Medium C.            High C.
                    Baseline  Proposed   Baseline  Proposed   Baseline  Proposed
Skilled forgeries     13.8      10.1       7.5       5.2        6.2       4.6
Random forgeries       1.5       1.3       0.7       0.5        0.9       0.9
with high personal entropy levels), our proposed approach achieves an absolute
improvement of 3.7% EER compared to the Baseline System. Analysing the results
obtained for the random forgery cases, our Proposed Systems also achieve
improvements for all personal entropy levels. In this case, the improvement is
lower than for the skilled forgery cases due to the already low error values and the way the
SFFS algorithm was applied during the training of the systems (focused on skilled
forgery cases). The results obtained after applying our proposed approach based on
personal entropy on-line signature verification systems outperform the state-of-the-art
results for the BiosecurID database. In Ref. 8, the authors achieved an absolute
improvement of 1.0% EER for skilled forgery cases, whereas our proposed
approach achieves an average absolute improvement of 2.5% EER compared to
the same Baseline System.
Fig. 5. Experiment 2: Analysis of the False Rejection Rate (FRR) at different values of False Ac-
ceptance Rate (FAR) for both Proposed and Baseline Systems on the whole evaluation dataset.
For completeness, Fig. 5 shows the performance of the Baseline and Proposed
Systems considering all personal entropy levels together in terms of the false re-
jection rate (FRR) at different values of false acceptance rate (FAR). Our Proposed
Systems achieve a final value of 5.8% FRR for a FAR = 5.0% and 3.9% FRR for
a FAR = 10.0%. These results show the importance of considering different sig-
nature verification systems for each personal entropy level in order to enhance the
verification systems with more robust time functions.
4. Case Study 2: Predicting Age Groups from Touch Patterns
Age group prediction based on handwritten touch patterns acquired from touchscreen
devices such as smartphones or tablets is a recent and important challenge.
Touchscreen devices provide mobile access to an unlimited number of digital contents
and services (e.g. more than half of YouTube visits come from mobile devices,
and this percentage is increasing26). Digital services are used by people from
everywhere, of all ages, ethnicities and socioeconomic statuses. In this context,
the classification of users according to geographic and demographic attributes is
crucial for service personalization (e.g. recommender systems, parental control,
security).27 Some of these attributes can be obtained from metadata associated
with the device (e.g. IP address, language selection, GPS location) or can be inferred
from user behavior (e.g. browsing history, social network contents, and
keystroke dynamics).28 We want to highlight the spread of the use of this kind of
device among young children. The study in Ref. 29 reveals that 97% of US children under
the age of four use mobile devices, regardless of family income.
In this case study we analyze a way to classify users of touch panels into
two age groups (children and adults). Age is a key attribute in user profiling
with direct application in several automatic systems (e.g. parental control,
recommender systems, advertising). Three examples of use cases are: i) locking
content and/or applications: locking some services on tablets and smartphones
when children are using them, e.g. buying new applications or accessing sensitive content;
ii) studies of users' ages by service providers: in this way, service providers could develop
new content that better fits their actual audience; iii) real-time interface adaptation:
as children have worse control of their fine movements than adults, changing
default interfaces to specially tailored ones could be beneficial.
The most popular method to reveal the age of the user is based on an online
questionnaire in which the user directly answers questions about their age. However,
this solution assumes that: i) users respond honestly, and ii) users
can read. Neither assumption can be guaranteed, for many practical reasons.
Besides the fact that people lie, nowadays children start to use digital platforms
and services before learning to read.
In the existing literature, there are many experiments exploring the use of tech-
nology by children, seeking how to improve the design of adapted interfaces and
applications.30 However, modeling and characterizing mathematically how children
interact with touch devices and how their behaviour differs from that of adults
is a field that has not been studied deeply enough. A work related to this topic
is Ref. 31, where the authors analyzed different types of touch tasks, such as tap, rotate, or drag
and drop, and found that children have different success rates when trying to
perform different tasks. Simple tasks, for example tapping, can be done by all
children without any problem, but the more complex ones are very difficult for
very young children to complete.
In Ref. 32, the authors measured the touch patterns of children and compared them to patterns
from adults. They discovered that children have a larger miss rate than adults when
trying to hit small targets. In Ref. 33, tap tasks are used to extract time- and precision-based
features. The authors designed two different approaches, using only one tap for
classification and using 7 consecutive taps. They obtain high accuracy rates: 86.5%
in the one-tap approach, and 99% accuracy when combining the scores of 7 consecutive
taps. Even though they obtain good results using tap tasks, we decided
to use drag and drop tasks because the differences in neuromotor
development between users can be manifested in a better way. A direct comparison
between approaches is not fair because different tasks/information are used
to classify users. However, in our work we demonstrate that using a very common
and fast action (e.g. unlocking the screen with a drag and drop gesture) we can achieve
higher classification rates than those achieved in Ref. 33 for the one-task approach (the
second approach has not been implemented yet). In our opinion, both approaches
are complementary, have very different natures, and can be combined to achieve
higher performance.
The difference between adults and children is mainly caused by the different
maturity of their anatomy and neuromotor system. These are less mature in
children, who therefore have worse manual dexterity, resulting in rougher movements.12,34
In order to characterize the interaction of children and adults with touchscreen
devices, we propose to use a model of the human neuromotor system. The Sigma
LogNormal theory of rapid human movements represents complex movements
with an analytic model that describes some physical and cognitive features of human
beings.35,36 Studies such as Ref. 14 have shown that the Sigma LogNormal model can
be used to characterize children's handwriting. They conclude that there are two
main groups of children, separable by looking at their learning stage. Children's
neuromotor skills become more similar to adults' skills as they grow up,
namely, when they finish their preoperational stage. At age 10, children know how
to activate each little muscle properly to produce precise fine movements.37
As they are based on the same neuromotor skills, the principles applied to hand-
Table 5. Sigma LogNormal features extracted.

Space-based features       Time-based features
f1 = D_i                   f8  = Δt0 = t_{0,i} − t_{0,i−1}
f2 = μ_i                   f9  = v2 = |v_i(t_{2i})|
f3 = σ_i                   f10 = v3 = |v_i(t_{3i})|
f4 = sin(θ_si)             f11 = v4 = |v_i(t_{4i})|
f5 = cos(θ_si)             f12 = δt05 = t_{5i} − t_{0i}
f6 = sin(θ_ei)             f13 = δt15 = t_{5i} − t_{1i}
f7 = cos(θ_ei)             f14 = δt13 = t_{3i} − t_{1i}
                           f15 = δt35 = t_{5i} − t_{3i}
                           f16 = δt24 = t_{4i} − t_{2i}
                           f17 = Δt1 = t_{1,i} − t_{1,i−1}
                           f18 = Δt3 = t_{3,i} − t_{3,i−1}
writing models can be also used to model touchscreen patterns.
In this case study we propose the use of the Sigma LogNormal model to detect
age groups, as a simple application of the model to drag and drop touch tasks
showed large differences between the velocity profiles of adults and children. In
particular, this case study is focused on the classification of users into two groups:
children under 6 years old and adults. We use information from simple touch tasks
collected from 119 people (89 children and 30 adults) using two different types of
devices: a smartphone and a tablet. Single-sensor and cross-sensor scenarios have
been evaluated. The results show accuracies over 90% in several scenarios, with a
top correct classification rate of 96% for the data obtained from tablets.
4.1. Proposed System
In this case, a more complex system was developed compared to Case Study 1 in
order to predict age groups from drag and drop touch tasks, as the main focus here
was to optimize the final classification result.
The parameters of the Sigma LogNormal model (as described in Sect. 2) were used to compute 18 different features per lognormal (see Table 5), as described in.35 These features can be classified into two groups: space-based and time-based. Space-based features are those that give information about the spatial distribution of the strokes, such as Di, µi, σi, and other features based on θsi and θei (see Table 1). Time-based features comprise the values of the speed at some relevant points of the strokes, such as their maxima and inflexion points, and the time offsets between those points. The task time and the number of lognormals in each task have been added as additional features.
It is worth noting that lognormals with an amplitude lower than a threshold were discarded. The 18 features from35 are then computed for each stroke, and each feature is averaged across strokes. The 18 averaged features are augmented with the task time and the number of strokes to generate the final feature vector of size 20.

Fig. 6. Comparison between the Sigma LogNormal speed profiles of (a) an adult and (b) a child following the same task.
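The pipeline just described (amplitude filtering, per-stroke features, averaging, plus task time and stroke count) can be sketched as follows. Here `extract_stroke_features` is a hypothetical stand-in for the computation of the 18 features of Table 5, and the stroke tuples and threshold value are illustrative assumptions:

```python
def build_feature_vector(strokes, task_time, amp_threshold=0.01,
                         extract_stroke_features=None):
    """Average the per-stroke features over the retained strokes and append
    the task time and the number of strokes (final size: 18 + 2 = 20).
    Each stroke is assumed to be a tuple whose first element is the
    lognormal amplitude D."""
    # Discard lognormals whose amplitude D is below the threshold
    kept = [s for s in strokes if s[0] >= amp_threshold]
    if not kept:
        raise ValueError("no strokes above the amplitude threshold")
    per_stroke = [extract_stroke_features(s) for s in kept]  # 18 values each
    n = len(per_stroke)
    averaged = [sum(f[k] for f in per_stroke) / n for k in range(18)]
    return averaged + [task_time, float(len(kept))]
```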
Regarding the classification of the age of the user, quite often it is possible to differentiate between children and adults simply by looking at the velocity profile of a touch-screen task. Figure 6 presents an example of these profiles, obtained from the same drag-and-drop task in both cases. A visual comparison between children's and adults' velocity profiles shows that children's signals are usually composed of a larger number of strokes than those of adults, and therefore have a higher degree of complexity.
Figures 7(a) and 7(b) show the histograms of two features (the covered distance f1 and the log-time delay f2) for children and adults. These two features are highly discriminative, as their histograms are clearly separated, showing differences between both classes and therefore suggesting their potential for the classification task.
As classifier we use an SVM (Support Vector Machine) with an RBF (Radial
Basis Function) kernel because of its good general performance in binary classification tasks and the small number of parameters to configure.
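For reference, the decision function of a trained RBF-kernel SVM has the form f(x) = Σi αi K(svi, x) + b with K(x, y) = exp(−γ‖x − y‖²), and the predicted class is its sign. A minimal sketch (the support vectors and coefficients below are illustrative, not trained on the chapter's data):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def svm_decision(x, support_vectors, dual_coefs, bias=0.0, gamma=1.0):
    """Decision value of a trained RBF-SVM; the predicted class is its sign."""
    return bias + sum(alpha * rbf_kernel(sv, x, gamma)
                      for sv, alpha in zip(support_vectors, dual_coefs))
```

In practice only two hyper-parameters (the regularization constant C and the kernel width γ) need tuning, which is the configuration advantage mentioned above.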
Fig. 7. Probability Density Functions for two features (f1 and f2), computed on tablet data. These are highly discriminative features, as the distributions of children and adults are clearly separated.
4.2. Database and Experimental Protocol
The database used is publicly available and was presented in.37 It comprises data from the touchscreen activity of both children and adults performing pre-designed tasks in an ad-hoc app. In the present work, we have used the data from the singletouch and multitouch drag-and-drop activities. Drag-and-drop activities consist of picking one object on the device screen and moving it to a target area.
Multidevice information is available, as the users have completed the tasks on both a smartphone and a tablet. Both single-sensor and cross-sensor tasks are analyzed.

Table 6. Accuracy results for the 20 lognormal features. The accuracy is measured as the rate of correct classifications considering both classes. Rows correspond to training samples and columns to testing samples.

  Training \ Testing    Phone Singletouch  Tablet Singletouch  Phone Multitouch  Tablet Multitouch
  Phone Singletouch     93.6%              95.0%               88.0%             92.1%
  Tablet Singletouch    93.7%              96.3%               88.9%             94.0%
  Phone Multitouch      94.1%              95.9%               88.0%             92.8%
  Tablet Multitouch     93.0%              96.3%               87.9%             94.6%
The dataset is composed of 89 children between 3 and 6 years old and 30 young adults under 25 years old. The mean age of the children is 4.6 years. The total number of drag-and-drop tasks is 2912 for children and 1157 for adults (see37 for more details).
As the experimental protocol, the database was divided randomly into training (60%) and testing (40%) sets. The random selection was repeated 50 times and the final performance is presented in terms of averaged correct classification accuracy.
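This protocol can be sketched as follows, with a simple one-feature threshold rule standing in for the SVM classifier (the stand-in rule and its assumption that class 1 takes the larger feature values are purely illustrative):

```python
import random
import statistics

def averaged_accuracy(samples, labels, n_rep=50, train_frac=0.6, seed=0):
    """Repeat a random 60/40 train/test split n_rep times and average the
    correct classification accuracy. samples are 1-D feature values; the
    stand-in classifier thresholds halfway between the two class means,
    assuming class 1 takes the larger values."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    accuracies = []
    for _ in range(n_rep):
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        train, test = idx[:cut], idx[cut:]
        m0 = statistics.mean(samples[i] for i in train if labels[i] == 0)
        m1 = statistics.mean(samples[i] for i in train if labels[i] == 1)
        thr = (m0 + m1) / 2.0
        correct = sum((samples[i] > thr) == (labels[i] == 1) for i in test)
        accuracies.append(correct / len(test))
    return statistics.mean(accuracies)
```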
4.3. Results
Table 6 shows the accuracies obtained according to the different scenarios. They
are presented in terms of correct classification accuracy (percentage of samples
from both classes correctly classified).
The mean accuracy taking into account all the evaluated scenarios is 92.8%. The classification rates are over 96% in a single-sensor setting and over 95% in a cross-sensor scenario. The best results are obtained with tablets as sensors, while using smartphone data slightly degrades the results.
Compared with,33 where an accuracy of 86.5% is obtained using one tap task for classification and a single-sensor approach (using smartphone data), our system performs better, achieving 93.6% accuracy using only data from smartphones, and over 96% using data from tablets. Another conclusion that can be extracted from Table 6 is that the data obtained from multitouch tasks yield worse results than the singletouch cases. The best multitouch scenario is obtained using tablet data for both training and testing, with 94.6% accuracy, compared with 96.3% for its singletouch counterpart. This may be caused by the less developed control of the left hand by right-handed people and vice versa. The main reason for using the Sigma LogNormal model is that adults have better control of fine movements than children, which translates into different values of the model parameters.37
The cross-sensor scenarios obtain results not far from those of the single-sensor scenarios. The results obtained using smartphone singletouch data for training and tablet singletouch data for testing (95.9% accuracy) are quite similar to those obtained using only tablet singletouch data (96.3% accuracy). This fact makes this type of system very suitable for real applications due to its high independence of the device used.
Due to the higher number of children in the database compared to adults, selecting a percentage of the total users makes the training and testing sets unbalanced. Experiments balancing the number of samples of both classes in training and testing have been carried out. The results show small variations, of around 1% accuracy, with respect to the presented results.
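A balancing step of the kind described could look like the following hypothetical helper (the chapter does not specify the exact procedure used):

```python
import random

def balance_classes(samples, labels, seed=0):
    """Randomly subsample the majority class so that both classes
    contribute the same number of samples."""
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    n_min = min(len(v) for v in by_class.values())
    balanced = [(s, l) for l, v in by_class.items()
                for s in rng.sample(v, n_min)]
    rng.shuffle(balanced)
    return balanced
```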
Figure 8 shows histograms of the scores calculated in the classification process. It can be seen that the scores from children and adults are visibly separated into two different zones, making it possible to obtain high accuracy rates (over 96%). There are also other zones where the score distributions overlap. These regions are the source of incorrect classifications. Combining scores from several tasks of the same user could make it possible to reduce the overlap areas, increasing the accuracy rate even further.
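The effect of combining per-task scores can be illustrated with a small Monte-Carlo sketch: averaging K independent scores shrinks each class distribution around its mean and so reduces the overlap region. The Gaussian score distributions below are an assumption for illustration, not the chapter's actual score histograms:

```python
import random
import statistics

def fused_error_rate(n_tasks, n_users=2000, seed=0):
    """Classify each simulated user by the sign of the mean of n_tasks
    per-task scores, drawn from N(-1, 1) for children and N(+1, 1) for
    adults, and return the misclassification rate."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_users):
        is_adult = rng.random() < 0.5
        mu = 1.0 if is_adult else -1.0
        score = statistics.mean(rng.gauss(mu, 1.0) for _ in range(n_tasks))
        if (score > 0.0) != is_adult:
            errors += 1
    return errors / n_users
```

In this toy setting the error rate drops markedly when five scores are fused instead of one, consistent with the intuition above.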
Fig. 8. Histograms of scores using the Sigma LogNormal model features. The left figure represents the scores for the single-sensor scenario, using tablet singletouch data for both training and testing. The right figure shows the histogram for a cross-sensor scenario, using phone singletouch data for training and tablet multitouch data for testing the classifier.
5. Conclusions
This work has reported experimental results on modeling the complexity of
biomechanical tasks through the usage of the Sigma LogNormal model of the
Kinematic Theory of rapid human movements. Two different case studies have
been analyzed.
The first case study has focused on applying the Sigma LogNormal model to develop an on-line signature complexity detector. Using just the number of strokes of the signatures was enough to obtain very good results when differentiating between three signature complexity groups (low, medium and high). As a second stage, a specific signature verification system was developed for each signature complexity group by carrying out a time-function selection process. Very significant improvements in recognition performance have been shown when comparing the proposed system with a baseline, both being based on DTW and time functions as features. For future work, the approach considered in this work will be further analysed using the e-BioSign public database38 in order to consider new scenarios such as the case of using the finger as the writing tool. Novel systems based on the usage of Recurrent Neural Networks (RNNs)39 and the fusion of different systems40 will be considered. Also, different types of presentation attacks against signature recognition systems41 will be considered, analysing how signatures with different complexity levels are affected.
On the other hand, the second case study has focused on age group prediction (children from adults) from handwritten touch patterns acquired from touchscreen devices such as smartphones or tablets. Applying the Sigma LogNormal model to some examples of drag-and-drop tasks from children and adults showed that children have more complex velocity profiles, with a larger number of sigma lognormals. The proposed approach is based on 20 features extracted from the model, and the results achieved were very promising, with classification rates over 96% in a single-sensor setting and over 95% in a cross-sensor scenario. Future work includes the analysis of touchscreen data to continuously monitor the user behaviour.42
Acknowledgements
This work has been supported by project TEC2015-70627-R MINECO/FEDER,
Bio-Guard (Ayudas Fundación BBVA a Equipos de Investigación Científica 2017)
and by the UAM-CecaBank Project. Ruben Tolosana and Alejandro Acien are supported by a FPU Fellowship from Spanish MECD, and Javier Hernandez by a FPI
Fellowship from UAM.
References
1. R. Plamondon, G. Pirlo and D. Impedovo, Online Signature Verification. Springer
(D. Doermann and K. Tombre (Eds.), Handbook of Document Image Processing and
Recognition, Springer, pp. 917-947, 2014).
2. R. Guest, Age Dependency in Handwritten Dynamic Signature Verification Systems,
Pattern Recognition Letters.27(10), 1098–1104 (2006). ISSN 0167-8655.
3. J. Fierrez, A. Pozo, M. Martinez-Diaz, J. Galbally, and A. Morales, Benchmarking
touchscreen biometrics for mobile authentication, IEEE Trans. on Information Foren-
sics and Security.13(11), 2720–2733 (November, 2018). doi: https://doi.org/10.1109/
TIFS.2018.2833042.
4. R. Tolosana, R. Vera-Rodriguez, J. Fierrez, and J. Ortega-Garcia. Incorporating touch
biometrics to mobile one-time passwords: Exploration of digits. In Proc. IEEE/CVF
Conference on Computer Vision and Pattern Recognition Workshops, CVPR-W (June,
2018).
5. C. O’Reilly and R. Plamondon, Development of a Sigma-Lognormal Representation
for On-Line Signatures, Pattern Recognition.42(12), 3324–3337 (2009).
6. J. Galbally, R. Plamondon, J. Fierrez, and J. Ortega-Garcia, Synthetic on-line signature
generation. part i: Methodology and algorithms, Pattern Recognition.45, 2610–2621
(July, 2012). doi: http://dx.doi.org/10.1016/j.patcog.2011.12.011.
7. M. Diaz, A. Fischer, M. A. Ferrer, and R. Plamondon, Dynamic signature verification
system based on one real signature, IEEE Transactions on Cybernetics.PP(99), 1–12
(2017). ISSN 2168-2267. doi: 10.1109/TCYB.2016.2630419.
8. M. Gomez-Barrero, J. Galbally, J. Fierrez, J. Ortega-Garcia and R. Plamondon, En-
hanced On-Line Signature Verification Based on Skilled Forgery Detection Using
Sigma-LogNormal Features, Proc. IEEE/IAPR Int. Conf. on Biometrics, ICB. pp. 501–
506 (2015).
9. A. Fischer and R. Plamondon, Signature verification based on the kinematic theory
of rapid human movements, IEEE Transactions on Human-Machine Systems.47(2),
169–180 (April, 2017). ISSN 2168-2291. doi: 10.1109/THMS.2016.2634922.
10. R. Tolosana, R. Vera-Rodriguez, R. Guest, J. Fierrez, and J. Ortega-Garcia.
Complexity-based biometric signature verification. In Proc. 14th IAPR Int. Confer-
ence on Document Analysis and Recognition, ICDAR (November, 2017).
11. J. Hernandez-Ortega, A. Morales, J. Fierrez, and A. Acien, Detecting age groups using
touch interaction based on neuromotor characteristics, IET Electronics Letters. pp. 1–2
(September, 2017). doi: http://dx.doi.org/10.1049/el.2017.0492.
12. J. Piaget and B. Inhelder, The psychology of the child. vol. 5001, Basic books (1969).
13. M. Djioua and R. Plamondon, A new algorithm and system for the characterization of
handwriting strokes with delta-lognormal parameters, IEEE Transactions on Pattern
Analysis and Machine Intelligence.31(11), 2060–2072 (2009).
14. T. Duval, C. Rémi, R. Plamondon, J. Vaillant, and C. O'Reilly, Combining sigma-lognormal modeling and classical features for analyzing graphomotor performances in kindergarten children, Human Movement Science.43, 183–200 (2015).
15. J. Fierrez, J. Ortega-Garcia and J. Gonzalez-Rodriguez, Target Dependent Score Nor-
malization Techniques and Their Application to Signature Verification, IEEE Trans-
actions on Systems, Man, and Cybernetics. Part C.35(3), 418–425 (2005).
16. F. Alonso-Fernandez, M.C. Fairhurst, J. Fierrez and J. Ortega-Garcia, Impact of Sig-
nature Legibility and Signature Type in Off-Line Signature Verification, In Proc. IEEE
Biometrics Symposium. pp. 1–6 (2007).
17. M. Lim and P. Yuen, Entropy Measurement for Biometric Verification Systems, IEEE
Transactions on Cybernetics.46(5), 1065–1077 (2016).
18. Z.H. Zhou, Biometric Entropy. Encyclopedia of Biometrics, Springer (S.Z. Li and A.
Jain (Eds.), Encyclopedia of Biometrics, Springer, pp. 273-274, 2009).
19. N. Houmani, S. Garcia-Salicetti and B. Dorizzi, A Novel Personal Entropy Mea-
sure Confronted to Online Signature Verification Systems Performance, In Proc. Intl.
Conf.on Biometrics : Theory, Applications and System, BTAS. pp. 1–6 (2008).
20. N. Houmani and S. Garcia-Salicetti, On Hunting Animals of the Biometric Menagerie
for Online Signature, PLOS ONE.11(4), 1–26 (2016).
21. M. Martinez-Diaz, J. Fierrez and S. Hangai, Signature Features (S.Z. Li and A. Jain
(Eds.), Encyclopedia of Biometrics, Springer, pp. 1375-1382, 2015).
22. M. Martinez-Diaz, J. Fierrez, R. P. Krish, and J. Galbally, Mobile Signature Veri-
fication: Feature Robustness and Performance Comparison, IET Biometrics.3(4),
267–277 (2014).
23. M. Martinez-Diaz, J. Fierrez and S. Hangai, Signature Matching (S.Z. Li and A. Jain
(Eds.), Encyclopedia of Biometrics, Springer, pp. 1382-1387, 2015).
24. J. Fierrez, J. Galbally, J. Ortega-Garcia, et al., BiosecurID: A Multimodal Biometric
Database, Pattern Analysis and Applications.13(2), 235–246 (2010).
25. M. Martinez-Diaz, J. Fierrez, R.P. Krish and J. Galbally, Mobile Signature Verifica-
tion: Feature Robustness and Performance Comparison, IET Biometrics.3(4), 267–
277 (December, 2014).
26. Youtube. Youtube statistics. https://about.twitter.com/company Retrieved September
13, 2016.
27. BBA. Mobile phone apps become the UK's number one way to bank. https://www.bba.org.uk/news/press-releases/mobile-phone-apps-become-the-uks-number-one-way-to-bank Retrieved September 13, 2016.
28. R. Daza-Garcia, J. Hernandez-Ortega, A. Morales, J. Fierrez, M. González Barrero, and J. Ortega-Garcia. KBOC: Plataforma de evaluación de tecnologías de reconocimiento biométrico basadas en dinámica de tecleo. In XXXI Simposium Nacional de la Unión Científica Internacional de Radio (2016).
29. H. K. Kabali, M. M. Irigoyen, R. Nunez-Davis, J. G. Budacki, S. H. Mohanty, K. P.
Leister, and R. L. Bonner, Exposure and use of mobile media devices by young chil-
dren, Pediatrics.136(6), 1044–1050 (2015).
30. B. Cassidy and L. McKnight, Children’s interaction with mobile touch-screen devices:
Experiences and guidelines for design, Int. J. Mob. Hum. Comput. Interact. 2(2),
1–18 (Apr., 2010). ISSN 1942-390X. doi: 10.4018/jmhci.2010040101. URL http:
//dx.doi.org/10.4018/jmhci.2010040101.
31. N. A. A. Aziz, F. Batmaz, R. Stone, and P. W. H. Chung. Selection of touch gestures
for children’s applications. In Science and Information Conference (SAI), 2013, pp.
721–726 (2013).
32. L. Anthony, Q. Brown, J. Nias, B. Tate, and S. Mohan. Interaction and recognition
challenges in interpreting children’s touch and gesture input on mobile devices. In
Proceedings of the 2012 ACM international conference on Interactive tabletops and
surfaces, pp. 225–234 (2012).
33. R.-D. Vatavu, L. Anthony, and Q. Brown. Child or adult? inferring smartphone users
age group from touch measurements alone. In Human-Computer Interaction, pp. 1–9
(2015).
34. C. O’Reilly and R. Plamondon, Development of a sigma–lognormal representation for
on-line signatures, Pattern Recognition.42(12), 3324–3337 (2009).
35. A. Fischer and R. Plamondon. A dissimilarity measure for on-line signature verifica-
tion based on the sigma-lognormal model. In 17th Biennial Conference of the Interna-
tional Graphonomics Society (2015).
36. R. Plamondon, C. O'Reilly, C. Rémi, and T. Duval, The lognormal handwriter: learning, performing, and declining, Frontiers in Psychology.4, 945 (2013).
37. R.-D. Vatavu, G. Cramariuc, and D. M. Schipor, Touch interaction for children aged
3 to 6 years: Experimental findings and relationship to motor skills, International
Journal of Human-Computer Studies.74, 54–76 (2015).
38. R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, Bench-
marking desktop and mobile handwriting across cots devices: the e-biosign biometric
database, PLOS ONE.5(12) (2017).
39. R. Tolosana, R. Vera-Rodriguez, J. Fierrez, and J. Ortega-Garcia, Exploring recurrent
neural networks for on-line handwritten signature biometrics, IEEE Access. pp. 1 – 11
(2018). doi: 10.1109/ACCESS.2018.2793966.
40. J. Fierrez, A. Morales, R. Vera-Rodriguez, and D. Camacho, Multiple classifiers in
biometrics. Part 2: Trends and challenges, Information Fusion.44, 103–112 (Novem-
ber, 2018). doi: https://doi.org/10.1016/j.inffus.2017.12.005.
41. R. Tolosana, R. Vera-Rodriguez, J. Fierrez, and J. Ortega-Garcia, Presentation Attacks in Signature Biometrics: Types and Introduction to Attack Detection, In eds. S. Marcel, M. S. Nixon, J. Fierrez and N. Evans, Handbook of Biometric Anti-Spoofing (2nd Edition). Springer (2018).
42. A. Acien, A. Morales, J. Fierrez, R. V. Rodriguez, and J. Hernandez-Ortega, Active
detection of age groups based on touch interaction, IET Biometrics.8(1), 101–108
(January, 2019). doi: http://dx.doi.org/10.1049/iet-bmt.2018.5003.
... The authors discussed how children's gesturing abilities and behaviors differ between age groups, and from adults. Vera-Rodriguez et al. [7] presented an automatic system able to detect children from adults with classification rates over 96%. This detection system is based on the combination of features based on neuromotor skills, task time, and accuracy. ...
... Remi et al. [18] studied the scribbling activities executed by children of 3-6 years. They considered the Sigma-Lognormal writing generation model [7], [31] to analyse the motor skills, concluding that there are significant differences in the model parameters between ages. Stylus has also been considered by Tabatabaey-Mashadi et al. [26] to analyse the correlation between the performance of polygonal shape drawing and the levels in handwriting performance. ...
... This section analyses quantitatively one of the many different potential applications of ChildCIdb. In particular, we focus on the popular task of children age group detection based on the interaction with mobile devices [5], [7], [8]. Due to the large volume of information captured in ChildCIdb, we focus in this section only on the analysis of the Test 6 (Drawing Test) based on the way children colour a tree. ...
Article
Full-text available
This article provides an overview of recent research in Child-Computer Interaction with mobile devices and describe our framework ChildCI intended for: i) overcoming the lack of large-scale publicly available databases in the area, ii) generating a better understanding of the cognitive and neuromotor development of children along time, contrary to most previous studies in the literature focused on a single-session acquisition, and iii) enabling new applications in e-Learning and e-Health through the acquisition of additional information such as the school grades and children’s disorders, among others. Our framework includes a new mobile application, specific data acquisition protocols, and a first release of the ChildCI dataset (ChildCIdb v1), which is planned to be extended yearly to enable longitudinal studies. In our framework children interact with a tablet device, using both a pen stylus and the finger, performing different tasks that require different levels of neuromotor and cognitive skills. ChildCIdb is the first database in the literature that comprises more than 400 children from 18 months to 8 years old, considering therefore the first three development stages ofthe Piaget’s theory. In addition, and as a demonstration of the potential of the ChildCI framework, we include experimental results for one of the many applications enabled by ChildCIdb: children age detection based on device interaction.
... This research line has many different potential applications, e.g., restrict the access to adult contents or services such as on-line shopping. In [9], the authors presented an automatic system able to detect children from adults with classification rates over 96%. This detection system is based on the combination of features based on neuromotor skills, task time, and accuracy. ...
... In [20], Remi et al. studied the scribbling activities executed by children of 3-6 years. They considered the Sigma-Lognormal writing generation model [9], [28] to analyse the motor skills, concluding that there are significant differences in the model parameters between ages. Stylus has also been considered in [26] to analyse the correlation between the performance of polygonal shape drawing and the levels in handwriting performance. ...
... This section analyses quantitatively one of the many different potential applications of ChildCIdb. In particular, we focus on the popular task of children age group detection based on the interaction with mobile devices [7], [9], [10], [33]. Due to the large volume of information captured in ChildCIdb, we focus in this section only on the analysis of the Test 6 (Drawing Test) based on the way children colour a tree. ...
Preprint
Full-text available
We overview recent research in Child-Computer Interaction and describe our framework ChildCI intended for: i) generating a better understanding of the cognitive and neuromotor development of children while interacting with mobile devices, and ii) enabling new applications in e-learning and e-health, among others. Our framework includes a new mobile application, specific data acquisition protocols, and a first release of the ChildCI dataset (ChildCIdb v1), which is planned to be extended yearly to enable longitudinal studies. In our framework children interact with a tablet device, using both a pen stylus and the finger, performing different tasks that require different levels of neuromotor and cognitive skills. ChildCIdb comprises more than 400 children from 18 months to 8 years old, considering therefore the first three development stages of the Piaget's theory. In addition, and as a demonstration of the potential of the ChildCI framework, we include experimental results for one of the many applications enabled by ChildCIdb: children age detection based on device interaction. Different machine learning approaches are evaluated, proposing a new set of 34 global features to automatically detect age groups, achieving accuracy results over 90% and interesting findings in terms of the type of features more useful for this task.
... Fig. 1 graphically summarises the design, acquisition devices, and writing tools considered in the DeepSignDB database. Its application extends from the improvement of signature verification systems via deep learning to many other potential research lines, e.g., studying: i) user-dependent effects, and development of userdependent methods in signature biometrics, and handwriting recognition at large [13], ii) the neuromotor processes involved in signature biometrics [14], and handwriting in general [15], iii) sensing factors in obtaining representative and clean handwriting and touch interaction signals [16], [17], iv) human-device interaction factors involving handwriting and touchscreen signals [9], and development of improved interaction methods [18], and v) population statistics around handwriting and touch interaction signals, and development of new methods aimed at recognising or serving particular population groups [19], [20]. ...
... For future work, we encourage the research community to use DeepSignDB database for several purposes: i) perform a fair comparison of novel approaches with the state of the art (we refer the reader to download the DeepSignDB 4 and follow the ICDAR 2021 Competition on On-Line Signature Verification, SVC 2021 5 ) ii) evaluate the limits of novel DL architectures, and iii) carry out a more exhaustive analysis of the challenging finger input scenario. In addition, DeepSignDB can be also very useful to study neuromotor aspects related to handwriting and touchscreen interaction [14] across population groups and age [19] for diverse applications like e-learning and e-health [1]. Finally, we plan to evaluate the usability and performance improvement of our proposed TA-RNN approach for other signature verification approaches based on the use of synthetic samples [52], [53], and for other behavioral biometric traits such as keystroke biometrics [54]. ...
Preprint
Full-text available
Deep learning has become a breathtaking technology in the last years, overcoming traditional handcrafted approaches and even humans for many different tasks. However, in some tasks, such as the verification of handwritten signatures, the amount of publicly available data is scarce, what makes difficult to test the real limits of deep learning. In addition to the lack of public data, it is not easy to evaluate the improvements of novel proposed approaches as different databases and experimental protocols are usually considered. The main contributions of this study are: i) we provide an in-depth analysis of state-of-the-art deep learning approaches for on-line signature verification, ii) we present and describe the new DeepSignDB on-line handwritten signature biometric public database, iii) we propose a standard experimental protocol and benchmark to be used for the research community in order to perform a fair comparison of novel approaches with the state of the art, and iv) we adapt and evaluate our recent deep learning approach named Time-Aligned Recurrent Neural Networks (TA-RNNs) for the task of on-line handwritten signature verification. This approach combines the potential of Dynamic Time Warping and Recurrent Neural Networks to train more robust systems against forgeries. Our proposed TA-RNN system outperforms the state of the art, achieving results even below 2.0% EER when considering skilled forgery impostors and just one training signature per user.
... Most of the remaining studies consider the lognormal and other realted distributions in a more general setting that includes not only keystroke dynamics but also touch-screen biometrics [27]. Going beyond authentication, [28] and [29] employ the sigma-lognormal model of rapid human movements to detect the age group of users based on their interaction with a touch screen, while [30] leverages different distributions to discriminate a human user from a bot. ...
Article
Full-text available
Keystroke dynamics is a soft biometric trait. Although the shape of the timing distributions in keystroke dynamics profiles is a central element for the accurate modeling of the behavioral patterns of the user, a simplified approach has been to presuppose normality. Careful consideration of the individual shapes for the timing models could lead to improvements in the error rates of current methods or possibly inspire new ones. The main objective of this study is to compare several heavy-tailed and positively skewed candidate distributions in order to rank them according to their merit for fitting timing histograms in keystroke dynamics profiles. Results are summarized in three ways: counting how many times each candidate distribution provides the best fit and ranking them in order of success, measuring average information content, and ranking candidate distributions according to the frequency of hypothesis rejection with an Anderson-Darling goodness of fit test. Seven distributions with two parameters and seven with three were evaluated against three publicly available free-text keystroke dynamics datasets. The results confirm the established use in the research community of the log-normal distribution, in its two- and three-parameter variations, as excellent choices for modeling the shape of timings histograms in keystroke dynamics profiles. However, the log-logistic distribution emerges as a clear winner among all two- and three-parameter candidates, consistently surpassing the log-normal and all the rest under the three evaluation criteria for both hold and flight times.
... As an example, Acien et al. [12] analysed the neuromotor patterns extracted from touch gestures to discriminate between children and adults, in order to adapt the content showed in the smartphone to the user age. In [13], the authors model the complexity of online signatures over smartphone touchscreens using the neuromotor patterns associated to touch gestures. ...
Preprint
In this paper we list the sensors commonly available in modern smartphones and provide a general outlook of the different ways these sensors can be used for modeling the interaction between human and smartphones. We then provide a taxonomy of applications that can exploit the signals originated by these sensors in three different dimensions, depending on the main information content embedded in the signals exploited in the application: neuromotor skills, cognitive functions, and behaviors/routines. We then summarize a representative selection of existing research datasets in this area, with special focus on applications related to user authentication, including key features and a selection of the main research results obtained on them so far. Then, we perform the experimental work using the HuMIdb database (Human Mobile Interaction database), a novel multimodal mobile database that includes 14 mobile sensors captured from 600 participants. We evaluate a biometric authentication system based on simple linear touch gestures using a Siamese Neural Network architecture. Very promising results are achieved with accuracies up to 87% for person authentication based on a simple and fast touch gesture.
... Additionally, we added the number of lognormals N that each mouse trajectory generates as an additional feature. This feature measures the complexity of the trajectory [31]: a trajectory with many lognormals has many changes in its velocity profile, while few lognormals usually indicate simpler trajectories. ...
Article
Full-text available
We first study the suitability of behavioral biometrics to distinguish between computers and humans, commonly named bot detection. We then present BeCAPTCHA-Mouse, a bot detector based on neuromotor modeling of mouse dynamics that enhances traditional CAPTCHA methods. Our proposed bot detector is trained using both human and bot data generated by two new methods developed for generating realistic synthetic mouse trajectories: i) a knowledge-based method based on heuristic functions, and ii) a data-driven method based on Generative Adversarial Networks (GANs) in which a Generator synthesizes human-like trajectories from a Gaussian noise input. Experiments are conducted on a new testbed also introduced here and available on GitHub: BeCAPTCHA-Mouse Benchmark, useful for research in bot detection and other mouse-based HCI applications. Our benchmark data consist of 10,000 mouse trajectories, including real data from 58 users and bot data with various levels of realism. Our experiments show that BeCAPTCHA-Mouse is able to detect bot trajectories of high realism with 93% accuracy on average using only one mouse trajectory. When our approach is fused with state-of-the-art mouse dynamics features, the bot detection accuracy increases by more than 36% in relative terms, proving that mouse-based bot detection is a fast, easy, and reliable tool to complement traditional CAPTCHA systems.
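The lognormal count used as a complexity feature above requires a full Sigma LogNormal decomposition of the trajectory. As a rough, hedged proxy, one can count pronounced peaks in the speed profile, since each ballistic stroke contributes one lognormal-shaped velocity bump; the function name and threshold below are illustrative, not the papers' actual extractor.

```python
import numpy as np

def count_velocity_peaks(x, y, t, min_height=0.05):
    """Rough proxy for the number of lognormal strokes in a trajectory:
    count local maxima of the speed profile that exceed a fraction of
    the global maximum. A real Sigma LogNormal extractor fits lognormal
    components to the speed profile instead of just counting peaks."""
    vx = np.gradient(x, t)
    vy = np.gradient(y, t)
    speed = np.hypot(vx, vy)
    thresh = min_height * speed.max()
    peaks = 0
    for i in range(1, len(speed) - 1):
        if speed[i] > speed[i - 1] and speed[i] >= speed[i + 1] and speed[i] > thresh:
            peaks += 1
    return peaks

# A trajectory made of two consecutive ballistic strokes -> two speed peaks.
t = np.linspace(0, 2, 400)
x = np.where(t < 1, 1 - np.cos(np.pi * t), 3 - np.cos(np.pi * (t - 1)))
y = np.zeros_like(t)
print(count_velocity_peaks(x, y, t))  # -> 2
```

A smooth single-stroke trajectory yields a count of 1, while a jittery bot-unlike human trajectory yields many peaks, matching the intuition in the snippet above.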
... • We demonstrate the application of TA-RNNs for other time sequence tasks, i.e., on-line handwritten signature verification, outperforming the state of the art by a large margin as well. MobileTouchDB can also be useful for other research lines, e.g.: i) user-dependent effects [12], and development of user-dependent methods for handwriting recognition [13], ii) the neuromotor processes involved in writing over touchscreens [14], [15], iii) sensing factors in obtaining representative and clean touch interaction signals [16], [17], iv) human-device interaction factors involving touchscreen signals [18], [19], and development of improved interaction methods, and v) population statistics around touch interaction signals, and development of new methods aimed at recognising or serving particular population groups [20]. ...
Article
Full-text available
Passwords are still used on a daily basis for all kinds of applications. However, they are not secure enough by themselves in many cases. This work enhances password scenarios through two-factor authentication, asking the users to draw each character of the password instead of typing them as usual. The main contributions of this study are as follows: i) We present the novel MobileTouchDB public database, acquired in an unsupervised mobile scenario with no restrictions in terms of position, posture, and devices. This database contains more than 64K on-line character samples performed by 217 users, with 94 different smartphone models, and up to 6 acquisition sessions. ii) We perform a complete analysis of the proposed approach considering both traditional authentication systems such as Dynamic Time Warping (DTW) and novel approaches based on Recurrent Neural Networks (RNNs). In addition, we present a novel approach named Time-Aligned Recurrent Neural Networks (TA-RNNs). This approach combines the potential of DTW and RNNs to train more robust systems against attacks. A complete analysis of the proposed approach is carried out using both the MobileTouchDB and e-BioDigitDB databases. Our proposed TA-RNN system outperforms the state of the art, achieving a final 2.38% Equal Error Rate, using just a 4-digit password and one training sample per character. These results encourage the deployment of our proposed approach in comparison with traditional typed-based password systems, where the attack would have a 100% success rate under the same impostor scenario.
Chapter
Full-text available
Authentication applications based on the use of biometric methods have received a lot of interest during the last years due to the breathtaking results obtained using personal traits such as face or fingerprint. However, it is important not to forget that these biometric systems have to withstand different types of possible attacks. This work carries out an analysis of different Presentation Attack (PA) scenarios for on-line handwritten signature verification. The main contributions of the present work are: (1) a short overview of representative methods for Presentation Attack Detection (PAD) in signature biometrics; (2) a description of the different levels of PAs existing in on-line signature verification regarding the amount of information available to the attacker, as well as the training, effort, and ability required to perform the forgeries; and (3) an evaluation of the system performance in signature biometrics under different PAs and writing tools, considering freely available signature databases. Results obtained for both the BiosecurID and e-BioSign databases show the high impact on the system performance not only of the level of information that the attacker has, but also of the training and effort invested in performing the forgery. This work is in line with recent efforts in the Common Criteria standardization community towards security evaluation of biometric systems, where attacks are rated depending on, among other factors, the time spent, effort, and expertise of the attacker, as well as the information available and used from the target being attacked.
Article
Full-text available
This article studies user classification into children and adults according to their interaction with touchscreen devices. The authors analyse the performance of two sets of features derived from the sigma-lognormal theory of rapid human movements and a global characterisation of touchscreen interaction. The authors propose an active detection approach aimed at continuously monitoring the user patterns. The experimentation is conducted on a publicly available database with samples obtained from 89 children between 3 and 6 years old and 30 adults. The authors use a support vector machine (SVM) to classify the resulting features into age groups. The sets of features are fused at the score level using data from smartphones and tablets. The results, with correct classification rates over 96%, show the discriminative ability of the proposed neuromotor-inspired features to classify age groups according to the interaction with touch devices. In the active detection set-up, the authors' method is able to identify a child using only four gestures on average.
Conference Paper
Full-text available
This work evaluates the advantages and potential of incorporating touch biometrics into mobile one-time passwords (OTP). The new e-BioDigit database, which comprises on-line handwritten numerical digits from 0 to 9, has been acquired using the finger touch as input to a mobile device. This database is used in the experiments reported in this work and it is publicly available to the research community. An analysis of the OTP scenario using handwritten digits is carried out regarding which are the most discriminative handwritten digits and how robust the system is when increasing the number of them in the user password. Additionally, the best features for each handwritten numerical digit are studied in order to enhance our proposed biometric system. Our proposed approach achieves remarkable results with EERs ca. 5.0% when using skilled forgeries, outperforming other traditional biometric verification traits such as the handwritten signature or graphical passwords in similar mobile scenarios.
Article
Full-text available
Systems based on deep neural networks have made a breakthrough in many different pattern recognition tasks. However, the use of these systems with traditional architectures seems not to work properly when the amount of training data is scarce. This is the case of the on-line signature verification task. In this work we propose novel writer-independent online signature verification systems based on Recurrent Neural Networks (RNNs) with a Siamese architecture whose goal is to learn a dissimilarity metric from pairs of signatures. To the best of our knowledge this is the first time these recurrent Siamese networks are applied to the field of on-line signature verification, which provides our main motivation. We propose both Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) systems with a Siamese architecture. In addition, a bidirectional scheme (which is able to access both past and future context) is considered for both LSTM- and GRU-based systems. An exhaustive analysis of the system performance and also the time consumed during the training process for each recurrent Siamese network is carried out in order to compare the advantages and disadvantages for practical applications. For the experimental work we use the BiosecurID database comprised of 400 users who contributed a total of 11,200 signatures in 4 separated acquisition sessions. Results achieved using our proposed recurrent Siamese networks have outperformed state-of-the-art on-line signature verification systems using the same database.
Article
Full-text available
The present paper is Part 2 in this series of two papers. In Part 1 we provided an introduction to Multiple Classifier Systems (MCS) with a focus into the fundamentals: basic nomenclature, key elements, architecture, main methods, and prevalent theory and framework. Part 1 then overviewed the application of MCS to the particular field of multimodal biometric person authentication in the last 25 years, as a prototypical area in which MCS has resulted in important achievements. Here in Part 2 we present in more technical detail recent trends and developments in MCS coming from multimodal biometrics that incorporate context information in an adaptive way. These new MCS architectures exploit input quality measures and pattern-specific particularities that move apart from general population statistics, resulting in robust multimodal biometric systems. Similarly as in Part 1, methods here are described in a general way so they can be applied to other information fusion problems as well. Finally, we also discuss here open challenges in biometrics in which MCS can play a key role.
Conference Paper
Full-text available
On-line signature verification systems are mainly based on two approaches: feature- or time functions-based systems (a.k.a. global and local systems). However, new sources of information can be also considered in order to complement these traditional approaches, reduce the intra-class variability and achieve more robust signature verification systems against forgers. In this paper we focus on the use of the concept of complexity in on-line signature verification systems. The main contributions of the present work are: 1) classification of users according to the complexity level of their signatures using features extracted from the Sigma LogNormal writing generation model, and 2) a new architecture for signature verification exploiting signature complexity that results in highly improved performance. Our proposed approach is tested considering the BiosecurID on-line signature database with a total of 400 users. Results of 5.8% FRR for a FAR = 5.0% have been achieved against skilled forgeries outperforming recent related works. In addition, an analysis of the optimal time functions for each complexity level is performed providing practical insights for the application of signature verification in real scenarios.
Article
Full-text available
The dynamic signature is a biometric trait widely used and accepted for verifying a person's identity. Current automatic signature-based biometric systems typically require five, ten or even more specimens of a person's signature to learn intra-personal variability sufficient to provide an accurate verification of the individual's identity. To mitigate this drawback, this paper proposes a procedure for training with only a single reference signature. Our strategy consists of duplicating the given signature a number of times and training an automatic signature verifier with each of the resulting signatures. The duplication scheme is based on a sigma lognormal decomposition of the reference signature. Two methods are presented to create human-like duplicated signatures: the first varies the strokes' lognormal parameters (stroke-wise) whereas the second modifies their virtual target points (target-wise). A challenging benchmark, assessed with multiple state-of-the-art automatic signature verifiers and multiple databases, proves the robustness of the system. Experimental results suggest that our system, with a single reference signature, is capable of achieving a similar performance to standard verifiers trained with up to five signature specimens.
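The stroke-wise duplication described above operates on the Sigma LogNormal parameters (D, t0, mu, sigma) of each stroke. The sketch below computes the standard lognormal speed profile of one stroke and perturbs the parameters to create a human-like duplicate; the jitter magnitude and parameter values are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def lognormal_stroke(t, D, t0, mu, sigma):
    """Speed profile of one Sigma LogNormal stroke:
    v(t) = D / (sigma*sqrt(2*pi)*(t-t0)) * exp(-(ln(t-t0)-mu)^2 / (2*sigma^2))
    for t > t0, and 0 otherwise; D is the stroke amplitude."""
    v = np.zeros_like(t)
    m = t > t0
    tt = t[m] - t0
    v[m] = (D / (sigma * np.sqrt(2 * np.pi) * tt)
            * np.exp(-(np.log(tt) - mu)**2 / (2 * sigma**2)))
    return v

def duplicate_strokewise(params, rng, jitter=0.05):
    """Stroke-wise duplication: slightly perturb each stroke's
    lognormal parameters to synthesize a human-like variant."""
    return [(D * (1 + jitter * rng.standard_normal()),
             t0,
             mu + jitter * rng.standard_normal(),
             sigma * (1 + jitter * rng.standard_normal()))
            for (D, t0, mu, sigma) in params]

t = np.linspace(0, 3, 3000)
strokes = [(5.0, 0.0, -1.0, 0.30), (3.0, 0.5, -0.8, 0.25)]
speed = sum(lognormal_stroke(t, *p) for p in strokes)

# Sanity check: the area under one stroke's speed profile recovers its D,
# since the profile is D times a lognormal density (trapezoid rule).
v0 = lognormal_stroke(t, *strokes[0])
area = float(np.sum(0.5 * (v0[1:] + v0[:-1]) * np.diff(t)))
print(round(area, 2))  # ~ 5.0
```

Generating several such perturbed copies and training the verifier on all of them is the essence of the single-reference scheme summarized above.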
Article
We study user interaction with touchscreens based on swipe gestures for personal authentication. This approach has been analyzed only recently, in the last few years, in a series of disconnected and limited works. We summarize those recent efforts, and then compare them to three new systems (based on SVM and GMM using selected features from the literature) exploiting independent processing of the swipes according to their orientation. For the analysis, four public databases consisting of touch data obtained from gestures sliding one finger on the screen are used. We first analyze the contents of the databases, observing various behavioral patterns, e.g., horizontal swipes are faster than vertical ones independently of the device orientation. We then explore both an intra-session scenario, where users are enrolled and authenticated within the same day, and an inter-session one, where enrollment and test are performed on different days. The resulting benchmarks and processed data are made public, allowing the reproducibility of the key results obtained based on the provided score files and scripts. In addition to remarkable performance thanks to the proposed orientation-based conditional processing, the results show various new insights into the distinctiveness of swipe interaction, e.g.: some gestures hold more user-discriminant information, data from landscape orientation is more stable, and horizontal gestures are in general more discriminative than vertical ones.
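The orientation-based conditional processing described above can be sketched as a simple dispatcher: estimate the swipe's dominant direction from its net displacement and route it to an orientation-specific scorer. The scorer functions below are hypothetical placeholders standing in for the paper's SVM/GMM models.

```python
def swipe_orientation(points):
    """Classify a swipe as horizontal or vertical from its net
    displacement (the decision rule here is illustrative)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return 'horizontal' if abs(x1 - x0) >= abs(y1 - y0) else 'vertical'

def route_to_model(points, models):
    """Dispatch a swipe to its orientation-specific verifier."""
    return models[swipe_orientation(points)](points)

# Hypothetical per-orientation scorers standing in for trained models.
models = {'horizontal': lambda p: 'h-model', 'vertical': lambda p: 'v-model'}
print(route_to_model([(0, 0), (30, 5)], models))  # h-model
print(route_to_model([(0, 0), (4, 40)], models))  # v-model
```

Conditioning the verifier on orientation lets each model learn the tighter, orientation-specific behavioral patterns the abstract reports, instead of pooling all swipes into one looser model.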
Article
A new parental control method to prevent unauthorised usage of touch devices by kids is proposed. The impact of rapidly advancing technology on the developing child has led to increased exposure to new forms of danger. Studies reveal that 97% of US children under the age of four use mobile devices. A reliable and efficient method to prevent the use of touch devices by preschool children is proposed. The proposed method is based on the analysis of the neuromotor characteristics of the users according to the decomposition of simple drag-and-drop tasks using the kinematic theory of rapid human movements. The experimentation is conducted on a publicly available database with samples obtained from 89 children between 3 and 6 years old and 30 adults. The results are compared with an existing system based only on task time and accuracy. Finally, both systems are combined at score level to achieve better performances. The results, with correct classification rates over 96% in the combined system, show the discriminative ability of the proposed neuromotor-inspired features and the possibility of combining this system with others to improve their final performance.
Article
When using tablet computers, smartphones, or digital pens, human users perform movements with a stylus or their fingers that can be analyzed by the kinematic theory of rapid human movements. In this paper, we present a user-centered system for signature verification that performs such a kinematic analysis to verify the identity of the user. It is one of the first systems that is based on a direct comparison of the elementary neuromuscular strokes which are detected in the handwriting. Taking into account the number of strokes, their similarity, and their timing, the string edit distance is employed to derive a dissimilarity measure for signature verification. On several benchmark datasets, we demonstrate that this neuromuscular analysis is complementary to a well-established verification using dynamic time warping. By combining both approaches, our verifier is able to outperform current state-of-the-art results in on-line signature verification.
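The string edit distance over neuromuscular strokes mentioned above can be illustrated with the standard dynamic-programming recurrence, here over symbolic stroke labels with a pluggable substitution cost. This is a sketch: a real verifier would derive the cost from stroke similarity and timing (the lognormal parameters) rather than from label equality.

```python
def stroke_edit_distance(a, b, sub_cost):
    """String edit distance (Levenshtein-style dynamic programming)
    between two sequences of stroke labels, with a caller-supplied
    substitution cost; insertions and deletions cost 1."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[n][m]

# Identical strokes cost 0, different ones cost 1 (plain Levenshtein).
cost = lambda s, t: 0.0 if s == t else 1.0
print(stroke_edit_distance("abcd", "abed", cost))  # 1.0
print(stroke_edit_distance("abc", "abcde", cost))  # 2.0
```

Normalizing this distance by the number of strokes yields a dissimilarity score that, as the abstract notes, complements a DTW comparison of the raw signals.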