J Multimodal User Interfaces (2008) 2: 217–235
DOI 10.1007/s12193-009-0020-x
ARTICLE
Emerging biometric modalities: a survey
Georgios Goudelis · Anastasios Tefas · Ioannis Pitas
Received: 20 February 2009 / Accepted: 12 September 2009 / Published online: 29 September 2009
© OpenInterface Association 2009
Abstract Many body parts, personal characteristics and
signaling methods have recently been suggested and used
for biometrics systems: fingers, hands, feet, faces, eyes,
ears, teeth, veins, voices, signatures, typing styles and gaits.
A continuously increasing number of biometric techniques
have risen in order to fulfill the different kinds of demands
in the market. Every method presents a number of advan-
tages compared to the others as each technique has been
created to subserve different kinds of requirements. How-
ever, there is still no method able to completely satisfy the
current security needs. This is the reason why researchers
continuously drive their efforts to newer methods that will
provide a higher security stage. In this paper, the emerging
biometric modalities are presented.
Keywords Biometrics · Emerging biometrics · Human
recognition/verification
1 Introduction
Biometric recognition of people is a pioneering and evolv-
ing research area that aims to fulfil the human need for
security. The term biometrics recognition of people refers
to automatic security systems that rely on physical or be-
havioural human characteristics. In the beginning of the
last decade, biometrics were considered the most promising
solution for the development of future security sys-
tems. Many body parts, personal characteristics and imag-
ing methods have been suggested and used for biometrics
G. Goudelis (✉) · A. Tefas · I. Pitas
Dept. of Informatics, Aristotle University of Thessaloniki,
Thessaloniki, Greece
e-mail: goudelis@aiia.csd.auth.gr
systems: fingers, hands, feet, faces, eyes, ears, voices, sig-
natures, typing styles and gaits.
The problem of automatic person recognition/verification
for security applications is eventually the one that attracted
the interest of the research community. On the one hand,
person recognition refers to the problem of recognizing the
identity of a test person (using one or more of his or her biometric
characteristics) by selecting the most similar (best match)
or the N most similar persons from a given database [1,2].
Usually, these systems are supported by a human expert that
takes the final decision for the identity of the test person. On
the other hand, person verification refers to the automatic
acceptance or rejection of an identity claim. That is, a test
person claims the identity of a person that is included in the
system database and the system has to decide either to ac-
cept the claim or not. The problem of person verification
is the one that has attracted the interest of many research
groups and companies in the last years and stimulated the
development of many verification techniques and biometric
systems using several modalities [3].
Face recognition/verification is considered one of the
most attractive biometric applications and has received
significant attention [4–6]. The problem of machine recogni-
tion of human faces continues to attract researchers from
disciplines such as image processing, pattern recognition,
neural networks, computer vision, computer graphics, and
psychology. Although a large number of algorithms and
different applications have been proposed, face recogni-
tion/verification remains an active subject of research. Its ul-
timate efficiency is still an unsolved issue which depends on
many factors like the recording conditions, the method and
the image database used [7,8].
Speaker recognition is another modality that has been
under research for many years, [9]. Voice biometrics use
the information contained in the speech stream to perform
identification. They usually benefit from using good micro-
phones and noise cancellation techniques but are vulnerable
to conditions that affect the performance of these systems:
background and channel noise, variable and inferior micro-
phones and telephones, extreme hoarseness, fatigue, or vo-
cal stress [10]. However, there are several levels of informa-
tion in speech that are not affected by these conditions such
as “word usage” [11].
Probably the most common known biometric is finger-
prints. Fingerprint technologies are mostly based on the
analysis of two-dimensional maps of fingerprints produced
by a number of different sensor types. During the processing
stage, the ridge patterns on the fingertip are often reduced to
a digital representation for efficient storage. These technolo-
gies are practical and easy to implement but performance
measures vary widely and are affected by many factors such as
dryness, dirt or ageing [12,13].
As already mentioned, the number of the proposed bio-
metrics is large and many review articles have been pub-
lished, analyzing the advantages and disadvantages of each
of these well-known methods. However, it is important to
note that even though current machine recognition systems
have reached a certain level of maturity, their success is
limited by the conditions imposed by many real applica-
tions [5]. Besides effectiveness, the availability and the af-
fordability of biometric technologies appear to be important
requirements for biometric systems.
The need for security in every day life is continuously
increasing and the various possible demands require dif-
ferent approaches. Since the classical biometric modalities
are not able to supply the needs of every possible security
requirement, numerous emerging biometric modalities are
presented, trying to fill the gap. In this paper we will intro-
duce the emerging technologies on biometrics.
At this point we should mention, that the scope of this
paper is not to provide an extensive review of the typi-
cal biometric solutions such as iris, fingerprint, face, voice,
gait, retina and signature, but only to concentrate on emerging
biometric modalities. Moreover, we should note that
nowhere in this paper do we claim that any of the emerging
modalities performs better than any of the well-studied
ones. All of the presented methods have just emerged
and it is obvious that time is required until these methods
are truly evaluated. The rest of the paper is organized as fol-
lows. In Sect. 2, we briefly describe the emerging biometric
techniques. Conclusions are drawn in Sect. 3, summarizing
the presented developments.
2 Emerging biometric modalities
2.1 Gait
Although gait was proposed as a biometric solution
over a decade ago, it is still seen as a future biometric
[14,15]. Psychological studies have demonstrated that it is
possible to recognize people by the way they walk. Recently,
great attention has been given on how machine vision sys-
tems are able to take advantage of gait’s individuality and
support biometric applications. Gait as a biometric is exam-
ined for many years and many methods have been proposed.
Since the number of publications concerning the specific
modality is quite large, the most representative and recent
advances are presented here.
Boyd and Little in [16] define gait to be “the coordinated
cyclic combination of movements that result in human loco-
motion”. The movements are coordinated in the sense that
they must occur with a specific temporal pattern for the gait
to occur. The set of movements that constitute a full gait
cycle is repeated in every cycle. The periodicity of these
movements, as well as the coordinated and cyclic motion of gait,
makes it a unique phenomenon. The basic data types used in
gait and motion analysis systems are: background subtraction,
silhouettes, optical flow and motion energy/history
images. There is a variety of methods that are used for gait
recognition and, according to [16], they are categorized by their
source of oscillations: shape, joint trajectory, self similarity,
and pixel.
An example of a system using joint trajectories is given
in [17]. The method extracts a hip joint trajectory from a se-
quence of images. Subsequently, recognition is performed
based on the Fourier components of the trajectory. The
method, tested on a database of 10 subjects, yields recognition rates
of 80% and 100% for Fourier features, and phase-weighted
Fourier features respectively. Accordingly, a self similarity
based method in [18], exploits this self similarity to create a
representation of gait sequences that is useful for gait recog-
nition. Researchers construct a self-similarity image from
the image sequence, in which pixel intensities indicate the
extent to which two images in the sequence are alike, i.e.,
pixel (i, j ) in the self-similarity image indicates the similar-
ity of the two images at times t_i and t_j.
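To make the idea concrete, a minimal sketch of such a self-similarity computation is given below in Python (this is an illustration only, not the implementation of [18]; the silhouette-frame input format and the use of mean absolute pixel difference as the similarity measure are assumptions):

import numpy as np

def self_similarity_image(frames):
    """Build a self-similarity matrix from a sequence of silhouette frames.

    frames: array of shape (T, H, W); entry (i, j) of the result is small
    when frames i and j are alike (here: mean absolute pixel difference).
    """
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(np.float64)
    ssm = np.zeros((T, T))
    for i in range(T):
        for j in range(T):
            ssm[i, j] = np.mean(np.abs(flat[i] - flat[j]))
    return ssm

# Toy usage: a synthetic periodic sequence shows the diagonal band structure
# that a walking cycle produces in such maps.
t = np.arange(60)
frames = np.sin(2 * np.pi * t / 20)[:, None, None] * np.ones((60, 32, 32))
print(self_similarity_image(frames).shape)   # (60, 60)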
A system based on pixel oscillation in [19], demonstrates
how the frequency of the gait and the timing of the compo-
nent motions, determine the frequency and phase of the pixel
oscillations. More specifically, the authors demonstrated that an
array of phase-locked loops (PLL), one per pixel, can syn-
chronize internal oscillators to the frequency and phase of
pixel oscillations. This synchronization process inherently
performs frequency entrainment and phase locking. Boyd
uses a phasor, a complex number that represents a rotating
vector, to represent the magnitude and phase of the oscilla-
tions at each pixel. Thus, once the PLL synchronization oc-
curs, one can construct a complex image of phasors in which
each pixel indicates the extent to which there are oscillations
and the relative timing of the oscillations.
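A simplified way to obtain such a phasor image is sketched below (an illustration under stated assumptions, not Boyd's PLL scheme: the gait frequency is assumed known, and each pixel's phasor is estimated by projecting its time series onto a complex exponential at that frequency):

import numpy as np

def phasor_image(frames, gait_freq, fps):
    """Estimate a complex phasor per pixel for a given gait frequency.

    frames: array (T, H, W) of image intensities over time.
    gait_freq: assumed fundamental gait frequency in Hz.
    fps: frame rate of the sequence.
    Returns a complex (H, W) image: the magnitude encodes oscillation
    strength, the angle the relative phase of that pixel's oscillation.
    """
    T = frames.shape[0]
    t = np.arange(T) / fps
    basis = np.exp(-2j * np.pi * gait_freq * t)          # shape (T,)
    return np.tensordot(basis, frames.astype(np.float64), axes=(0, 0)) / T

# Toy usage: a pixel oscillating at 1 Hz shows large magnitude, others near zero.
fps, T = 30, 90
t = np.arange(T) / fps
frames = np.zeros((T, 8, 8))
frames[:, 2, 3] = np.cos(2 * np.pi * 1.0 * t + 0.5)
z = phasor_image(frames, gait_freq=1.0, fps=fps)
print(abs(z[2, 3]), abs(z[0, 0]))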
A more recent technology [20] uses markerless gait
analysis. The method is based on the anthropometric proportions
of human limbs and the characteristics of the gait
task. The system uses a single camera, does not require cam-
era calibration and works with a wide range of directions of
walking. The properties of the method give advantages to
it, as according to authors it overcomes marker technology
and makes a possible commercial product unobtrusive. The
proposed gait analysis is based on two consecutive steps:
a motion estimation method which extracts the limb’s ori-
entations with respect to the image reference system and
a view-point independent gait reconstruction algorithm that
normalizes and corrects the limbs' inclinations in the lateral
reference system. For the experiments, 200 video sequences
of 3 subjects viewed at 6 different camera inclinations were
used. The results, as illustrated, are comparable to those
obtained by reflective-marker-based techniques, which is
encouraging for real application scenarios.
Another recent study in gait identification [21], examines
the effects of covariation on the recognition process. The
authors show how these factors can separately affect the walking
pattern. Further, they assess the contribution and
discriminatory significance of the gait dynamics used for recognition.
On a database of 440 samples, a recognition rate
of 73.4% was achieved using a k-nearest neighbor (KNN)
classifier. The authors argue that the results confirm that person
identification using dynamic gait features remains feasible
with a good recognition rate even under the different covariate
factors.
Gait presents advantages compared to other biometric
modalities such as iris or fingerprints. Its main advantage
is that it is effective at a distance or where only low resolu-
tion images/video is available (e.g. CCTV cameras). How-
ever, there are many factors that can negatively influence the
accuracy of a gait recognition system. The speed at which
someone walks or runs has little effect on the biometric, but
wearing a trench coat can mask the feet, and using flip-flops
can also affect the results. With respect to gait security, stud-
ies also indicated that gait biometric is robust against min-
imal effort impersonation attacks. However, impostors who
know their closest person in the database or the gender of
the users in the database can be a threat to a gait authentica-
tion system. Although gait is a subject of research for many
years, it is still not suggested as a stand-alone application
and it is usually proposed for multi-modal biometrics, where
it is supposed to increase the overall performance of the
system.
2.2 Thermogram
Conventional video camera sensors capture reflected light, so that
image intensities depend on both intrinsic skin reflectivity and
external incident illumination, thus obfuscating the intrinsic
reflectivity of the skin. Thermal emission from the skin, on
the other hand, is an intrinsic measurement that can be iso-
lated from external illumination, under normal conditions.
Fig. 1 Sample images from the Equinox database [22]
Researchers have found that a unique heat distribution pat-
tern can be obtained from the human face. This pattern can
be seen by acquiring still images using infrared cameras.
The different densities of bone, skin, fat and blood ves-
sels all contribute to an individual’s personal “heat signa-
ture”. Example of a database containing thermal images is
the Equinox database [22]. Equinox database is a collection
of face imagery, in the following modalities: coregistered
broadband-visible/longwave infrared (8–12 microns), mid-
wave infrared (3–5 microns), shortwave infrared (0.9–1.7
microns). A few samples taken from the database are shown
in Fig. 1 [22].
Nine different comparative thermogram parameters are
used excluding the nose and ears, which are prone to wide
variations in temperature [23]. Once an image of a face
is taken, its thermal image can be matched with accuracy
against a database of pre-recorded thermographs. The
evaluation is based on a Monte Carlo analysis of performance
measures. This analysis reveals that under many circum-
stances, using thermal infrared imagery yields higher per-
formance, while in other cases performance in both modali-
ties is equivalent. Performance increases further when algo-
rithms on visible and thermal infrared imagery are fused.
A study in [23] examines the invariance of Long-Wave
Infrared (LWIR) imagery with respect to different illumi-
nation conditions from the viewpoint of performance com-
parisons of two face recognition algorithms (eigenfaces [24]
and Arena [25], respectively) applied to LWIR and visi-
ble imagery. A rigorous data collection protocol has been
developed that formalizes the meaning of thermal IR in
face recognition analysis. The experimental procedure was
performed on a database of prerecorded infrared videos of
91 subjects. The classification performance for ARENA on
LWIR imagery was reported to be up to 99%, while the minimum
score achieved was 97%. The minimum score was reported for
the case where the training set comprised frames representing
different expressions and faces with glasses. The
performance of eigenfaces on LWIR imagery was 96% and
87%, respectively, for the same training sets used for the ARENA
algorithm.
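For reference, the eigenfaces baseline used in such comparisons reduces to principal component analysis on vectorized images followed by nearest-neighbour matching. A minimal sketch in Python (generic PCA recognition on fixed-size face crops, not the evaluation code of [23]) is:

import numpy as np

def train_eigenfaces(train_imgs, n_components=20):
    """train_imgs: (N, H, W) gallery images. Returns mean, basis, projections."""
    X = train_imgs.reshape(len(train_imgs), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # Principal components of the centred gallery via SVD.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                    # (k, H*W)
    proj = (X - mean) @ basis.T                  # gallery coefficients
    return mean, basis, proj

def identify(probe_img, mean, basis, proj, labels):
    """Return the label of the nearest gallery image in eigenface space."""
    q = (probe_img.reshape(-1).astype(np.float64) - mean) @ basis.T
    dists = np.linalg.norm(proj - q, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage with random images standing in for thermal face crops.
rng = np.random.default_rng(0)
gallery = rng.random((10, 24, 24))
labels = list(range(10))
mean, basis, proj = train_eigenfaces(gallery, n_components=5)
print(identify(gallery[3] + 0.01 * rng.random((24, 24)), mean, basis, proj, labels))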
A comprehensive performance study of multiple appear-
ance-based face recognition methodologies, on visible and
thermal images is presented in [26]. This analysis (based
on Monte Carlo analysis) reveals that, under many circum-
stances, the use of thermal infrared images yields better per-
formance, while in other cases, performance in both modal-
ities is similar. Recognition performance increases further,
when algorithms applied to visible and thermal infrared im-
ages are combined. The matching is achieved by the use of
a Bayesian classifier. The experiments were performed on
the Equinox database and the highest matching rate reported
was 89.6%.
In [27,28], a two stage face recognition method based on
infrared images and statistical modelling of visible images is
presented, aiming to decrease the error caused by the pres-
ence of eyeglasses. An enhanced approach is proposed by
applying Bessel modelling on the facial region only, rather
than on the entire image and by pipelining a classification
algorithm to produce a unique solution. Although both ap-
proaches managed to improve the performance presented by
the single IR methods, they were not able to fully discount il-
lumination effects present in the visible (not IR) images. The
experimental results though, according to the authors, show
substantial improvements in the overall recognition perfor-
mance.
The most recent advance on thermal IR is outlined in [29]
where the novelty of the approach is the use of characteris-
tic and time-invariant physiological information to construct
the feature space. The motivation behind this effort is to con-
centrate on the permanency of innate characteristics that are
under the skin. The researchers support that although ther-
mal facial maps shift over time, the contrast between the su-
perficial vasculature and surrounding tissue remains invari-
ant. This physiological feature has permanence and is very
difficult to be altered as it is found under the skin. Therefore,
it gives a potent advantage to any face recognition method
that may use it. The method uses a novel Bayesian seg-
mentation algorithm to separate the facial tissue from the
background. Subsequently, it extracts the vascular contour
network from the surface of the skin by using white top
hat segmentation preceded by anisotropic diffusion. Ther-
mal Minutia Points (TMPs) are localized in order to create
a feature vector. Finally, recognition is performed by match-
ing TMP-based feature vectors. Tests with 500 thermal faces
from 50 subjects show an equal error rate (EER) of 6%.
One of the obvious advantages of systems using ther-
mal images is the ability to operate in complete darkness,
which makes them ideal for covert surveillance. Thermo-
grams also offer robustness over certain kinds of disguises.
The structures that are imaged are beneath the skin and this
makes their alteration almost impossible. They are also ro-
bust to aging and unaffected by traumatic epidermic acci-
dents. However, they have other limitations, including the
fact that glasses are opaque to IR radiation. The presence of
glasses and thick facial hair, as well as substantial perspira-
tion, which may be the result of exertion or heat, are major
problems that considerably affect the results [30].
2.3 Near infrared images
Near-infrared (NIR) images obtained from hyperspectral
cameras provide useful discriminant information for human
face recognition that cannot be obtained by other imaging
methods [31,32]. The use of near-infrared hyperspectral im-
ages for face recognition, over a database of 200 subjects, is
examined in the above referenced works. More specifically,
a face recognition algorithm is described that exploits the
spectral measurements for multiple facial tissue types. The
images were collected using a CCD camera equipped with
a liquid crystal tunable filter to provide 31 bands over the
near-infrared (0.7–1.0 µm) as shown in Fig. 2.
Spectral measurements over the near-infrared spectrum
allow the sensing of subsurface tissue structure which is sig-
nificantly different from person to person, but relatively sta-
ble over time while the provided facial features are some-
what illumination invariant. The experimental results show
that the local spectral properties of human tissue are nearly
invariant to face orientation and expression which allows
hyperspectral information to be used for recognition over a
large range of poses and expressions.
In [31], it is experimentally demonstrated that this al-
gorithm can be used to recognize faces over time in the
presence of changes in facial pose and expression. The au-
thors claim, that the algorithm performs significantly better
than the current face recognition systems for identifying ro-
tated faces. Performance might be further improved by mod-
elling the spectral reflectance changes due to face orientation
changes. As an extension of their previous work researchers
in [33], present results on recognizing 200 human subjects
under unknown outdoor illumination in hyperspectral face
images. For each subject, several NIR images with differ-
ent facial expressions and face orientations were acquired
on different days under various natural illumination condi-
tions. A set of 7258 global spectral irradiance functions were
used to synthesize reflected radiance images of each sub-
ject. A low-dimensional linear model for each tissue type
Fig. 2 Thirty-one bands of NIR
images of one subject [31]
for each subject was used to model illumination variation
in radiance images. Authors advocate their system claim-
ing that the algorithm provides accurate recognition perfor-
mance for front-view probes, with or without facial expres-
sion changes. They also add that the results are promising
for face recognition under unknown outdoor illumination
and various face orientations.
Another solution, including active NIR imaging hard-
ware, algorithms, and system design, is presented in [34].
The system is presented as another solution to problems cre-
ated due to illumination variation in face recognition modal-
ities. An illumination invariant face representation is ob-
tained by extracting local binary pattern (LBP) features from NIR
images to compensate for the monotonic gray-level transform.
Using statistical learning algorithms, the most discriminative
features are extracted from a large pool of invariant LBP
features and used to construct a highly accurate face matching
engine. For the dimensionality reduction and classification,
LBP+LDA and LBP+AdaBoost methods have been devel-
oped. For the experiments, 10,000 face images of about 1,000
people, all Chinese, were used for training the system. The
testing dataset contained 3,237 images from a total of 35 persons,
and the accuracy reported by the authors was 94.4%.
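A much simplified sketch of an LBP-based NIR pipeline is given below (illustrative only: it uses scikit-image's local_binary_pattern and a plain linear discriminant instead of the LBP+LDA/LBP+AdaBoost engines of [34], and synthetic images stand in for NIR face crops):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP histogram of a grayscale (NIR) face crop."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2                                  # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Toy usage: smoothed vs. unsmoothed noise stands in for two different
# face textures; a linear discriminant is trained on the LBP histograms.
rng = np.random.default_rng(1)
imgs = [gaussian_filter(rng.random((64, 64)), 1.5) if i % 2 else rng.random((64, 64))
        for i in range(20)]
X = np.array([lbp_histogram(im) for im in imgs])
y = np.array([i % 2 for i in range(20)])
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))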
In [35] the novelty, compared to other NIR systems, is the
use of constant illumination for face recognition. The authors
advocate that active NIR illumination provides a constant
invisible illumination condition and facilitates automatic
eye detection by introducing bright pupils. The results
provided indicate that actively illuminated faces show better
separability for all classifiers than faces under varying
ambient illumination. More specifically, radial basis function
(RBF), AdaBoost and support vector machine (SVM)
classifiers were applied on 2,360 face images from 295 subjects,
where the SVM achieved the best results with a zero error rate.
Another study that examines the effectiveness of NIR images
for face recognition [36] ascribes the success of the presented
system firstly to the NIR images that, as the authors advocate,
facilitate the classification process and, secondly, to the
learning-based methods with local features proposed in the paper.
Evaluation of the system on 1,470 persons indicated an equal
error rate of 0.3%.
The main advantage of the NIR image based techniques,
as already mentioned, is that they overcome problems due
to illumination. The use of NIR images is also supposed
to provide advantages over rotated faces, expressions and
robustness over time. However, the specific modality does
not seem yet to be suitable for uncooperative user applica-
tions such as face recognition in video surveillance [34]. Al-
though many methods present impressive performance on
both indoor and outdoor conditions, near infrared technol-
ogy so far, is mainly suggested for indoor cooperative user
applications.
2.4 Smile recognition
Another method for person recognition is suggested in [37].
A high speed camera with a strong zoom lens allows smile
maps to be produced. This map is claimed to be unique for
each person. This new method compares images of a person,
taken fractions of a second apart, while the person is smil-
ing. The system probes the characteristic pattern of muscle
deformations beneath the skin of the face. The way the skin
around the mouth is moved over the video frames, is ana-
lyzed by tracking the change in position and direction of tiny
wrinkles in the skin. The data is used in order to produce mo-
tion vectors describing the deformations of the facial region.
This deformation is controlled by the pattern of muscles un-
der the skin and is not affected by the size of the smile or the
presence of make-up. It is noted that a full smile is not re-
quired as the system is sensitive enough to produce a map of
features, even when people are trying to keep an unchanged
expression. The proposed technique is “invisible”, because
smile maps can be produced without the suspects knowing
that they are tracked. Further application of this method is
hoped to be found in medicine. Some nerve disorders cause
distinctive asymmetries in movement of facial muscles.
The system has been successfully tested so far only on
a very small database consisting of samples of 4 lab members
while smiling. The system is currently being tested on a larger
group of 30 smiling faces but no results have been reported
so far.
2.5 Lip recognition
A lip deformation recognition method that uses shape sim-
ilarity when vowels are uttered is proposed in [38]. In this
method, a mathematical morphology analysis is applied on
the lip area using three different structuring elements. The
proposed structuring elements are the square, vertical and
horizontal line and they are used for deriving a pattern spec-
trum of the lip images. The shape vector is compared with
the reference vector to recognize an individual from its lip
shape, as shown in Fig. 3.
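A minimal sketch of such a pattern-spectrum computation is shown below (an illustration under assumptions, not the system of [38]: a binary lip mask is assumed, scipy's binary opening is used, and the normalisation of the spectrum may differ from the original):

import numpy as np
from scipy.ndimage import binary_opening

def pattern_spectrum(mask, make_se, max_size=10):
    """Morphological pattern spectrum of a binary lip mask.

    make_se(n) returns the structuring element of size n; entry n of the
    spectrum is the area removed when moving from the opening of size n
    to the opening of size n + 1.
    """
    areas = [binary_opening(mask, structure=make_se(n)).sum()
             for n in range(1, max_size + 2)]
    return -np.diff(areas)                     # area lost at each scale

# Structuring-element generators for the three shapes mentioned in the text.
square = lambda n: np.ones((n, n), dtype=bool)
vline = lambda n: np.ones((n, 1), dtype=bool)
hline = lambda n: np.ones((1, n), dtype=bool)

# Toy usage: an elliptical blob stands in for a segmented lip region; the
# concatenated spectra form the shape vector compared against the reference.
yy, xx = np.mgrid[0:40, 0:80]
lip = ((xx - 40) ** 2 / 35 ** 2 + (yy - 20) ** 2 / 12 ** 2) < 1
shape_vector = np.concatenate([pattern_spectrum(lip, se) for se in (square, vline, hline)])
print(shape_vector)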
Experimental results show that the shape vector contains
enough information to perform recognition. In particular,
eight Japanese persons could be classified with 100.0% ac-
curacy by their lips. Of course, the test set is very small; the
results may be biased, and the authors make this clear. They note
that the system is not sophisticated yet and classification ac-
curacy has to be improved by considering other structuring
elements (for instance, rectangle, ellipse or an asymmetric
shape). A test on a significantly larger database is clearly
required to assess the performance of this method.
Another approach [39] considers lips’ shape and color
features in order to determine human identity. More specifi-
cally, the method calculates color features of the masked out
lips and merges them with shape features of the binarized
lips. Color statistics and moments are used, as well as a set of standard
geometrical parameters and the moments of Hu and Zernike.
The feature vector that finally describes the lips consists of a
selection of the most discriminant information from: Hu moments,
central moments, Zernike moments, standard geometrical
parameters, and statistical color features in RGB, YUV and HSV
color spaces. Experiments on a database of 38 subjects show
that the method was able to successfully recognize 76%
of the test samples.
Although the results are promising for such an emerg-
ing technology, it is obvious that further improvement is
strongly required for a stand-alone application. It is also
mentioned that lip detection, especially in footage acquired from
surveillance cameras, constitutes a major drawback of the system.
2.6 Thermal palm recognition
Palm print recognition has been investigated for more than
10 years [40]. A large number of methods have been proposed
Fig. 3 Overview of the Lip recognition system in [38]
and many different problems have been addressed. A novel
approach for personal verification using the thermal images
of palm-dorsa vein patterns captured by an infrared cam-
era is presented in [41] (Fig. 4). Two of the finger webs are
automatically selected as the datum points to define the re-
gion of interest on the thermal images. Feature points of the
vein patterns (FPVPs) are extracted by a watershed trans-
form modified according to the properties of thermal im-
ages. The watershed transform calculates the locations of
regional basin minima (or maxima) [42]. In this case, the re-
gion maximum method is used to extract the FPVPs, while
two extra restrictions have been added. The first restriction
is, that the pixel with a high regional maximum value is also
the central point of the region. The other is that its gray value
must be larger than the mean of the pixel value inside the re-
gion. According to the heat conduction law (Fourier Law),
multiple features can be extracted from each FPVP for veri-
fication.
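A rough sketch of the FPVP extraction step is given below (an approximation, not the modified watershed of [41]: local window maxima stand in for region maxima, and the two restrictions from the text are applied directly):

import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def extract_fpvps(thermal, window=9):
    """Candidate vein feature points (FPVPs) from a thermal palm-dorsa image.

    Approximates the modified region-maximum step: a pixel is kept if it is
    the maximum of its local window (the "central point" of the region) and
    its gray value exceeds the mean gray value of that window.
    """
    is_local_max = thermal >= maximum_filter(thermal, size=window)
    above_region_mean = thermal > uniform_filter(thermal.astype(np.float64), size=window)
    ys, xs = np.nonzero(is_local_max & above_region_mean)
    return np.column_stack([ys, xs])            # one (row, col) pair per FPVP

# Toy usage: a few warm "vein" spots on a cooler, noisy background.
rng = np.random.default_rng(2)
img = rng.normal(30.0, 0.1, size=(64, 64))      # skin temperature in deg C
for r, c in [(10, 20), (30, 40), (50, 12)]:
    img[r, c] += 2.0                            # warmer pixels over veins
print(extract_fpvps(img)[:5])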
Multiresolution representations of images with FPVPs
are obtained using multiple multiresolution filters (MRFs)
that extract the dominant points by filtering miscellaneous
features for each FPVP. More specifically, three different
MRFs are used to retain the properties of multiple features
of the FPVPs at the next level resolution. The first MRF is
called moment filter and is used to construct multiscale fea-
ture point images (FPIs). The second is called mean filter
and computes the means of the x and y coordinates as rep-
resentation for the next level resolution, while the third is
called count filter and counts the N feature points inside lo-
cal square windows for a representation of the next level res-
olution. A hierarchical integrating function is then applied
to integrate multiple features and multiresolution represen-
tations. The former is integrated by an inter-to-intra personal
variation ratio (weights) and the latter is integrated by a pos-
itive Boolean function.
The experimental results show rather satisfactory perfor-
mance (false rejection rate: 2.3% and false acceptance rate:
2.3%) [41]. However, there is still need for further investi-
gations to confirm performance in adverse conditions. The
effects caused by the ambient temperature, the thickness of
the skin, the degree of venous engorgement, the condition
of the vein walls and the nearness of the vein to the sur-
face, are some of the conditions that affect the recognition
rate. Finally, any variation in the surrounding temperature
may lead to unstable distribution patterns. This is one of the
main problems for this method and it is difficult to resolve
by relying only on the vein-pattern features in palm-
dorsum thermal images. Some issues in using palm prints
for personal identification have not been well addressed. For
instance, we know that ridges in palm prints are stable for
a person’s whole life but the stability of principal lines and
wrinkles has not been systematically investigated.
2.7 Hand/finger knuckle
In [43], a first approach of another novel biometric verifi-
cation system based on the texture of the hand knuckles is
presented. This method uses knuckle images isolated from
the hand. The wrinkles of the knuckle images are extracted to
a black and white image which is used as the biometric feature.
The different repetitions of the hands are aligned according
to a reference image called “training image”. As verifiers,
the authors use a hidden Markov model and a Support Vec-
tor Machine. The feature for hidden Markov model is the
sequence of image columns, while the feature for support
vector machine is a vector with the concatenate columns of
the image.
The training samples have been chosen randomly from
the database set and the tests have been performed using
Fig. 4 Thermal images captured from four different palm-dorsa:
(a1–a4), (b1–b4), (c1–c4), and (d1–d4) [41]
different samples. In order to enhance the experimental re-
sults, the proposers of this method repeated the training and
testing procedure ten times with different randomly chosen
training and testing sets. The testing results indicate a simi-
lar equal error rate of 0.094 for both classifiers with a data-
base consisting of 8 samples of each of 20 people's hands. The authors
note that this is a preliminary database but they argue that
the results are encouraging for further research on the spe-
cific modality.
A more particular area of the hand is investigated in [44].
Finger knuckles are claimed to be also unique and their sur-
face can be used as a distinctive identifier. The finger geom-
etry in conjunction with the knuckle texture obtained from
a single finger image improves the overall performance of
the system. The method analyzes the texture of the normal-
ized knuckle regions in spatial and frequency domain using
two dimensional Gabor filters. The proposers of the specific
technique tested their system on 105 users and report ac-
curacy comparable to or better than other hand-based bio-
metrics systems. However, it is also reported that the perfor-
mance of finger-knuckle identification depends sensitively
on the accuracy of knuckle segmentation from the fingers or
hands being measured. Traditional texture-phase information
using knuckle lines and creases is not yet satisfactory
and further efforts are required.
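A minimal sketch of the Gabor-based texture encoding mentioned above is given below (illustrative only; the filter-bank parameters and the summary statistics are assumptions, and scikit-image's gabor function is used):

import numpy as np
from skimage.filters import gabor

def knuckle_gabor_features(patch, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Texture descriptor of a normalized knuckle region.

    Responses of a small 2-D Gabor filter bank are summarized by the mean
    and variance of their magnitude, a common texture encoding.
    """
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.array(feats)

# Toy usage: a cosine grating with noise stands in for knuckle crease patterns;
# matching could then be a nearest-neighbour search over such feature vectors.
yy, xx = np.mgrid[0:64, 0:64]
patch = np.cos(2 * np.pi * 0.2 * xx) + 0.1 * np.random.default_rng(3).random((64, 64))
print(knuckle_gabor_features(patch).shape)      # (24,) for 3 frequencies x 4 orientations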
2.8 Finger-vein patterns
Another method for personal identification is proposed in
[45], based on finger-vein patterns. The authors proposed
finger vein patterns as a means of biometric identification
utilizing biological information. A brief
Fig. 5 (a) Principle of personal identification using finger-vein patterns, (b) Prototype of finger-vein imaging device and examples of infrared
images of a finger [45]
idea about how the finger images are produced is illustrated
in Fig. 5. Since the finger vein images used to obtain finger
vein patterns are acquired by irradiating the fingers with
infrared rays, fluctuations in brightness occur due to variations in
the light power or the thickness of the finger.
This paper proposes a scheme for extracting global finger
vein patterns by iteratively tracking local lines from various
positions to robustly extract finger vein patterns from such
unclear images. Researchers argue that an image of a fin-
ger captured under infrared light contains not only the vein
pattern, but also irregular shading produced by the various
thicknesses of the finger bones and muscles. The proposed
method extracts the centerlines of the veins from the unclear
image by calculating the curvature of the cross-sectional
profile of the image. To obtain the vein pattern spreading in
an entire image, all the profiles along a given direction are analyzed.
The profiles along four directions are analyzed so that
the vein pattern spreading in all directions is obtained.
Matching was performed using a commonly known method
for line-shaped patterns (template matching) proposed in the
authors' previous work [46,47].
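A simplified sketch of template matching for such line-shaped patterns is given below (an illustration, not the matching of [46, 47]: two binary vein templates are compared by the fraction of disagreeing pixels over a small range of translations to tolerate placement error):

import numpy as np

def mismatch_ratio(template, probe, max_shift=4):
    """Best mismatch ratio between two binary vein-pattern images.

    The probe is shifted over a small window of translations; for each shift
    the fraction of disagreeing pixels in the overlap is computed, and the
    smallest value is returned (0 = identical, 1 = fully different).
    """
    best = 1.0
    h, w = template.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ys = slice(max(0, dy), min(h, h + dy))
            xs = slice(max(0, dx), min(w, w + dx))
            ys2 = slice(max(0, -dy), min(h, h - dy))
            xs2 = slice(max(0, -dx), min(w, w - dx))
            best = min(best, np.mean(template[ys, xs] != probe[ys2, xs2]))
    return best

# Toy usage: a shifted copy of a vein skeleton matches far better than an
# unrelated pattern, so a simple threshold separates genuine and impostor trials.
rng = np.random.default_rng(4)
veins = rng.random((60, 40)) < 0.05
shifted = np.roll(veins, (2, 1), axis=(0, 1))
other = rng.random((60, 40)) < 0.05
print(mismatch_ratio(veins, shifted), mismatch_ratio(veins, other))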
The proposed scheme appears to be robust against bright-
ness fluctuations, compared with the conventional feature
extraction schemes. The method was tested on 678 subjects
and the evaluation results showed an equal error rate (EER)
of 0.0009%.
It is also reported that the mismatch ratio is slightly
higher during cold weather, because the veins of the finger
can become less visible. Therefore, a device that can cap-
ture the vein pattern more clearly and a feature extraction
algorithm that is robust against these fluctuations should be
investigated. The authors consider improving their system
in another direction as well. They believe that three dimen-
sional rotation of the finger degrades the identification accu-
racy. So, they consider modifying their application in such a
way that it will force the user to place a finger in the same
position every time. This method can be easily combined
with other biometric techniques based on parts of the hand
like fingerprints, finger/hand geometry. Another main disad-
vantage of this technology, is that it cannot be easily fitted in
small devices (mobiles, cards etc.) like fingerprints. Thicker
fingers present difficulties as light penetration may be insuf-
ficient in many cases.
It is worth noting that a commercial product called “Se-
cuaVeinAttestor” is based on finger vein imaging. Full spec-
ification and characteristics of this product are given in [48].
2.9 Nail ID
A really novel biometric modality is presented in [49]. It de-
scribes a commercial product that is supposed to identify a
person by reading the information that is hidden in the finger
nail, more specifically in the nailbed. The nail and nailbed
are shown in Fig. 6. The nailbed is an essentially parallel
epidermal structure located directly beneath the fingernail.
Anyone who has suffered a mashed fingernail may have seen
one or more thin blue lines appear under the nail. The line is
blood from a damaged blood vessel from inside the nailbed.
The epidermal network beneath the nail is mimicked on the
outer-surface of the nail. Rotating one’s fingernail under a
light reveals parallel lines spaced at intervals. The human
nailbed is a unique longitudinal, tongue-in-groove spatial
arrangement of papillae and skin folds arranged in parallel
rows. During normal growth, the fingernail travels over the
nailbed in a tongue-and-groove fashion.
Keratin microfibrils within the nailbed are located at the
interface of the nailbed and the nailplate, or fingernail. The
method utilizes a broadband interferometer technique to de-
tect polarized phase changes in back-scattered light intro-
duced through the nailplate and into the birefringent cell
Fig. 6 (a) Schematic
representation of nail,
(b) Microscopic picture of
nailbed [49]
layer. This is similar to the ordinary process of inspecting
microscopic structures on a multi-layered semiconductor.
By measuring the phase of the maximum amplitude polar-
ized optical signal, one can reconstruct the nailbed dimen-
sions using a pattern recognition algorithm on the interfer-
ometric data. The identification process generates a one-
dimensional map of the nailbed, a numerical string much
like a “barcode” which is unique to each individual. This
design may result in an inexpensive hardware scanning as-
sembly.
This technology may be more efficient than other rel-
evant modalities, such as fingerprints and hand geometry.
The nailbed, residing beneath the nailplate, is not externally
visible and hence difficult to alter or duplicate. The inven-
tors even argue that the system can also be accessed through
surgical gloves. However so far, there is no published work
showing the true capabilities and performance of this sys-
tem.
2.10 Skin spectroscopy
In [50], a new commercial biometric technology based on
the unique spectral properties of human skin is described.
Skin is a complex organ made of multiple layers, various
mixtures of biochemical substances and distinct structures,
such as hair follicles, sweat glands and capillary beds. While
every person has skin, each person’s skin is unique. Skin
layers vary in thickness, interfaces between skin layers have
different undulations and other characteristics, collagen fi-
bres and elastic fibres in the skin layer and capillary bed
density and location differ. Cell size and density within the
skin layers, as well as in the chemical makeup of these lay-
ers, also vary from person to person.
The system hardware and software are reported to recog-
nize these skin differences and the optical effects they pro-
duce. The developed sensor illuminates a small (0.4 inch
diameter) patch of skin at multiple wavelengths (“colors”)
of visible and near infrared light. The light that is diffusely
reflected back, after being scattered in the skin, is then mea-
sured for each of the wavelengths (Fig. 7). The changes
to the light as it passes through the skin are analyzed and
processed to extract a characteristic optical pattern that is
Fig. 7 Illustration showing light undergoing optical scatter as it passes
through skin, resulting in a portion of light that is diffusely re-
flected [50]
then compared to the pattern on record or stored in the de-
vice to provide a biometric authorization.
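At its core the comparison is between a measured multi-wavelength reflectance vector and an enrolled spectral template. A minimal sketch (an assumption-laden illustration, not the proprietary processing of [50, 51]; a per-wavelength mean/standard-deviation template and a standardized distance score are assumed) is:

import numpy as np

def enroll(samples):
    """Build a spectral template (per-wavelength mean and std) from several
    reflectance measurements of the same person; samples: (n, n_wavelengths)."""
    samples = np.asarray(samples, dtype=np.float64)
    return samples.mean(axis=0), samples.std(axis=0) + 1e-6

def verify(measurement, template, threshold=3.0):
    """Accept the claim if the standardized distance to the template is small."""
    mean, std = template
    score = np.sqrt(np.mean(((measurement - mean) / std) ** 2))
    return score < threshold, score

# Toy usage with 8 wavelengths: repeated measurements of "person A" verify,
# while a spectrally different sample does not.
rng = np.random.default_rng(5)
base_a = np.array([0.42, 0.47, 0.55, 0.61, 0.66, 0.70, 0.72, 0.71])
tmpl = enroll(base_a + rng.normal(0, 0.01, size=(10, 8)))
print(verify(base_a + rng.normal(0, 0.01, size=8), tmpl))
print(verify(base_a[::-1], tmpl))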
Since the optical signal is affected by changes to the
chemical and other properties of human skin, it also pro-
vides a very sensitive and easy way to confirm that a sample
is a living human tissue. Non-human tissue or synthetic ma-
terials have very different optical properties than the human
skin, which cause a corresponding change to the resulting
optical signal. Likewise, excised or amputated tissue under-
goes rapid changes in biochemistry, temperature and distri-
bution of fluids within the various physiological compart-
ments that also alter the optical signal. These optical differ-
ences ensure that a sample authorized by the biometric sen-
sor is truly that of a living human (aliveness detection). The
sensor used to perform these non-imaging optical measure-
ments is a small, solid-state device made up of light emitting
diodes and silicon photo detectors embedded in an alumina
ceramic housing shown in Fig. 8[51]. The sensing system
has been designed to fulfill the demanding requirements of
incorporating a biometric sensor in a personal portable elec-
tronic device such as a cellular telephone, laptop or PDA.
A multi-person performance evaluation was conducted
by the inventors of the solid-state spectral biometric sensor
over a 4-month period [51]. In total, 113 volunteers of
different ethnicities and ages participated in the study and were
measured over multiple visits. More than 11,000 individual
measurements were collected. Study participants were re-
quested to come in “as is” during their scheduled time. Prior
to performing the spectral measurements, an interview was
Fig. 8 Solid-state biometric sensor [51]
conducted to collect any noteworthy information
that could potentially correlate with error sources. Many
people indicated recent applications of lotion and other top-
ical substances on their hands, and dirt was noted on some
subjects’ hands. The overall equal error rate (EER) given
for this system for single-try data is 2.7%. However, the re-
searchers maintain that the overall performance improved
remarkably after the volunteers successfully used the sen-
sor a small number of times. After each person successfully
used the sensor 20 times, the overall EER obtained decreased
to 1.7%.
The spectroscopic approach as a biometric offers a great
advantage over other conventional technologies. Since skin is
such a complex organ, it cannot be copied or replaced
by synthetic materials, offering liveness detection in parallel.
An approach that examines spectroscopy as a liveness
detection solution for biometric systems is presented
in [52].
2.11 Ear prints
Using ears in identifying people has been a subject of inves-
tigation for at least 100 years. Researchers still debate
whether ears are unique, or unique enough to be used as a bio-
metric modality. Ear shape applications are not commonly
used yet, but the topic is interesting, especially in crime in-
vestigation. Burge and Burger think that ear biometrics is a
“viable and promising new passive approach to automated
human identification” [53]. When a burglar listens at, for in-
stance, a door or window before breaking and entering, oils
and waxes on the ear leave a print that can be made visible
using techniques similar to those used when lifting finger-
prints. The ‘FearID’ research project, a collaboration of sev-
eral European institutes, was aimed at the individualisation
of such an ear print to a person. The study presented in [54]
is compiled within the framework of this project.
Ear data can be received from photographs, video or
earprints produced by pressing the ear against a firm trans-
parent material, for instance glass. Ear print geometry is
shown in Fig. 9. The Polar axis shown in the figure, is a
common tangent to inner edge of the impression of the (on-
set of the) crus of helix and the tip of tragus. The ear print
Fig. 9 Reference points for metrical characteristics (‘cues’) of an
earprint [58]
geometry is based on the following metrical characteristics
(‘cues’):
(A) Intersection of the 290° line from tragus tip O with
the median line of the anthelix impression
(B) tangent point on the tip of the antitragus of a perpen-
dicular from the polar axis
(C) tangent point of tip of polar axis with the median line
of the (onset of the) crus of helix impression
(D) intersection point of the line extending OA with the
median line of the outer helix impression
(E) intersection point of the 345° line from tragus tip O
with the median line of the upper helix impression
(O) tangent point of polar axis with the tip of tragus
In [55] researchers suggest that the ear may have advantages
over the face for biometric recognition. Their previous ex-
perimental results working on ear and face recognition tasks,
using the standard principal component analysis, indicated
an almost equal recognition performance for the two dif-
ferent types of data. The dataset consisted of 197 subjects
used in training. Each sample had both face and ear images
taken under the same conditions and same image acquisi-
tion session. After testing the database under pose and light-
ing variation, they found that the recognition performance
is not significantly different between the face and the ear.
Their published work indicates a recognition rate of 70.5%
and 71.6% with 29.5% and 28.6% false recognition rate for
the face and the ear respectively.
Although there are many methods that use ear biomet-
rics, [56], their performance is not sufficient yet. Probably
the most important argument against the use of this biomet-
ric modality comes from its discriminant capacity. A Nether-
lands court decided that ear marks are not reliable enough
for judicial use [57]. It was also decided that as long as there is
no dependable proof that ears are unique, ear identification
cannot be used as evidence.
2.12 Mouse dynamics
It is known that most of the currently available biometric
technologies typically require special and often expensive
equipment that hinders their widespread use. An advanta-
geous solution is based on mouse dynamics [59].
It employs a similar idea to keystroke dynamics. Key-
stroke dynamics has been a common and widely known technique
since the beginning of the past decade [60]. The keystroke
dynamics method measures two distinct variables: “dwell
time”, which is the amount of time one holds down a partic-
ular key and the “flight time”, which is the amount of time it
takes a person to search and press the next appropriate key.
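Both quantities are straightforward to compute from key events. A minimal sketch (assuming events are recorded as (key, press time, release time) tuples with times in seconds) is:

def keystroke_features(events):
    """Dwell and flight times from a typed password.

    events: list of (key, press_time, release_time) tuples in typing order.
    Dwell time  = how long each key is held down.
    Flight time = gap between releasing one key and pressing the next.
    """
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# Toy usage for the password "abc"; the resulting timing pattern is the
# per-user signature that the classifier learns.
events = [("a", 0.00, 0.09), ("b", 0.21, 0.29), ("c", 0.45, 0.52)]
print(keystroke_features(events))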
According to the researchers, the proposed method uses
state of the art pattern recognition algorithms combined with
artificial intelligence to provide a biometric layer over tradi-
tional password based security. The system learns an opti-
mum set of mouse-movement characteristics unique to the
user’s mouse-written signature and uses them to authenti-
cate later signatures. It can also learn over time to include
changes of the user’s mouse signature characteristics. The
main idea of this method is illustrated in Fig. 10. First the
user’s mouse dynamics data are collected through an appli-
cation that monitors the mouse movement for the specified
duration. Certain signature characteristics are extracted from
the mouse dynamics patterns, such as double-clicking speed,
movement velocity and acceleration per direction.
Fig. 10 Main idea of the mouse dynamics recognition system [59]
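The characteristics just described can be computed directly from a stream of timestamped cursor positions. A minimal sketch (illustrative only; samples of the form (t, x, y) are assumed and only per-direction mean speed is computed, whereas the feature set of [59] is richer) is:

import numpy as np

def mouse_features(samples, n_directions=8):
    """Per-direction mean velocity from a mouse trajectory.

    samples: array (N, 3) of (t, x, y). Each segment between consecutive
    samples is binned by its movement direction; the mean speed per bin is
    one part of a mouse-dynamics signature (acceleration, click timing and
    silence statistics would be added similarly).
    """
    samples = np.asarray(samples, dtype=np.float64)
    dt = np.diff(samples[:, 0])
    dx, dy = np.diff(samples[:, 1]), np.diff(samples[:, 2])
    speed = np.hypot(dx, dy) / np.maximum(dt, 1e-6)
    direction = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)       # in [0, 1]
    bins = np.minimum((direction * n_directions).astype(int), n_directions - 1)
    return np.array([speed[bins == b].mean() if np.any(bins == b) else 0.0
                     for b in range(n_directions)])

# Toy usage: a short diagonal stroke.
traj = [(0.00, 10, 10), (0.02, 14, 13), (0.04, 19, 17), (0.06, 25, 22)]
print(mouse_features(traj))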
In order to further improve the system, re-
searchers combined the conventional keystroke dynamics
method with mouse dynamics. This way, a user must pass
two distinct tests to gain access to restricted content. The
first examines the typing style of the password and the sec-
ond the dynamics of the mouse based signature. The ad-
ditional level of security can vary according to application
needs. In trials with 41 participants, false acceptance and
false rejection rates of around 4.4% and 1%, respectively,
were obtained. In these trials, it was assumed that the password was
known, whereas in reality it would not be.
In [61], the behavior characteristics from the captured
data are modelled using artificial neural networks. A graph-
ical based application involving general mouse movement,
silence, drag and drop behavior, point and click behavior,
is used to measure several attributes with respect to the
user’s usage. The authors develop a mouse dynamic signa-
ture (MDS) for each user using a variety of machine learn-
ing techniques. The data collected for the experiments comprise
22 participants and were used in an off-line approach
to evaluate their detection system. The subjects were separated
into two categories (clients and impostors) and the
features obtained were used to train a neural network that
subsequently performs the classification. The FRR and the FAR
obtained for this study were both 2.4649%. This ap-
proach, according to the authors, could also be applied for continuous
user authentication.
Mouse dynamics presents a number of advantages: The
system builds on already familiar user skills, like mouse
movements and users can reliably reproduce complex mouse
based signatures. The system, based on neural networks, can
learn over time to incorporate changes in the user's typing
and mouse signature characteristics. The specific modality
is mostly proposed as an on-line biometric verification solu-
tion. On-line banking, internet shopping, or accessing web
based e-mail, could be a few of its possible applications.
However, mouse dynamics can be applied only to those applications
where a computer is a natural fit [62].
2.13 Electrocardiogram (ECG)
An electrocardiogram is an electrical recording of the heart
and is routinely used in the investigation of heart diseases.
ECG is widely known from its clinical usage and has been
used since the beginning of the 20th century for the di-
agnosis of different cardiac diseases. Recently, several re-
searchers characterized the ECG as unique to every individ-
ual [63–65].
In [66], ECG processing with quantifiable metrics was
proposed as a biometric modality. Data filters were designed
based upon the observed noise sources. Fiducial points were
identified on the filtered data and extracted digitally for
each heartbeat. From the fiducial points, stable features were
Fig. 11 ECG trace based upon cardiac physiology. L’ and P’ indi-
cate the start and end of atrial depolarization, the R complex indicates
ventricular depolarization, and the T complex indicates the ventricular
repolarization [66]
computed that characterize the uniqueness of an individual.
The locations of the fiducial positions, noted by an apos-
trophe (’), are illustrated in Fig. 11. Physically, the L’ and
P’ fiducials indicate the start and end of the atrial depolar-
ization. The corresponding S’ and T’ positions indicate the
start and end of ventricular repolarization. Collectively, the
fiducials describe the unique physiology of an individual.
The extracted features are based upon cardiac physiology
and have fixed positions relative to the heartbeat.
The tests show that the extracted features are independent
of sensor location, invariant to the individual’s state of anx-
iety, and unique to an individual. The above experimental
data were collected from males and females between 22 and
48 years old. Twenty-nine individuals were tested 12 repeat
times, for each of the 41 total sessions within the dataset.
Each individual session contained a set of recordings during
seven two-minute tasks. The tasks were designed to stim-
ulate different states of anxiety. Unlike conventional ECG
data, the hardware for this series of experiments collected
ECG data at a high temporal resolution of 1 ms. In tests
measuring the heartbeats at two different points (neck and
chest), the researchers managed to classify 82% and 72% of the
heartbeats for the two points, respectively, while in
both cases 100% of the subjects were identified.
The dataset was used to identify a population of individ-
uals. Additional data collection is being carried out in order to test
the scalability of the features to characterize a large popula-
tion as well as the stability of those features over long time
intervals.
In [67], researchers simplify the procedure and demon-
strate ECG’s use as a biometric under conditions that include
intra-individual variations and a simple user interface (elec-
trodes held on the pads of the subject’s thumbs). ECG person
identification was accomplished through quantitative com-
parisons of an unknown signal to enrolled signals. The quan-
titative comparisons were: the correlation coefficient and a
wavelet distance measure. It was found that the combina-
tion of these two methods provided improved performance,
relative to either individual method. ECG person identifica-
tion accuracy on 59 subjects was 90.8%. While this accuracy
is relatively low compared to conventional biometrics, such
as fingerprints, the ECG according to authors can be used
as supplementary information for a multi-modal biometric
system. A multi-modal system that includes the ECG would
have increased accuracy and robustness, without necessar-
ily requiring any change to the perceived user interface. At
minimum, the ECG would be useful in providing liveness
detection.
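The two comparison measures of [67] can be sketched as follows (an illustration under assumptions: aligned, equal-length heartbeat waveforms, PyWavelets for the wavelet decomposition, and a simple AND-fusion rule that may differ from the paper's):

import numpy as np
import pywt

def ecg_scores(probe, enrolled, wavelet="db4", level=4):
    """Correlation coefficient and a simple wavelet distance between two
    aligned, equal-length heartbeat waveforms."""
    corr = np.corrcoef(probe, enrolled)[0, 1]
    cp = np.concatenate(pywt.wavedec(probe, wavelet, level=level))
    ce = np.concatenate(pywt.wavedec(enrolled, wavelet, level=level))
    wdist = np.sum(np.abs(cp - ce)) / np.sum(np.abs(ce))
    return corr, wdist

def combined_decision(probe, enrolled, corr_min=0.95, wdist_max=0.3):
    """Accept the identity claim only if both measures agree."""
    corr, wdist = ecg_scores(probe, enrolled)
    return corr >= corr_min and wdist <= wdist_max

# Toy usage: a noisy copy of a synthetic heartbeat versus an unrelated signal.
t = np.linspace(0, 1, 256)
beat = np.exp(-((t - 0.4) / 0.02) ** 2) - 0.3 * np.exp(-((t - 0.47) / 0.04) ** 2)
rng = np.random.default_rng(6)
print(combined_decision(beat + 0.01 * rng.normal(size=256), beat))   # genuine
print(combined_decision(rng.normal(size=256), beat))                 # impostor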
It is important to mention that the technique is rather
difficult to use, since it requires the placement of electrodes
on the subject's body, making the enrolment and testing proce-
dures time-consuming. An evaluation on how easy an ECG
biometric system can be fooled by the morphology of the
electrocardiogram can be found in [68].
2.14 Electroencephalogram (EEG)
It has been shown that the brain activity measured in electric
waves is unique to every individual [69,70]. A new study
in [71], uses the brain wave pattern for person authentica-
tion. The authors hold that the use of EEG as a biometric
solution has several advantages as: it is confidential (as it
corresponds to a mental task), it is very difficult to mimic,
and is almost impossible to be copied or to be stolen.
In general, only a few approaches have been proposed in this
area and this is the first method concentrating on person au-
thentication. The authors propose a statistical framework
used in other biometric authentication approaches such as
face and speaker authentication. More specifically, they use
a statistical framework based on Gaussian Mixture Mod-
els and Maximum A Posteriori model adaptation which can
deal with only one training session. They perform intensive
experimental simulations using several strict train/test proto-
cols to show the potential of the specific method. They also
show that there are some mental tasks that are more appro-
priate for person authentication than others.
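A simplified sketch of such a GMM-based verification step is given below (an illustration, not the method of [71]: a client model is scored against a background model, and the Maximum A Posteriori adaptation step is omitted):

import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(client_feats, world_feats, n_components=4):
    """Fit a client GMM and a background ("world") GMM on EEG feature frames.

    Features are assumed to be per-window vectors (e.g. band powers of the
    chosen electrodes); rows are frames.
    """
    client = GaussianMixture(n_components, covariance_type="diag", random_state=0).fit(client_feats)
    world = GaussianMixture(n_components, covariance_type="diag", random_state=0).fit(world_feats)
    return client, world

def authenticate(test_feats, client, world, threshold=0.0):
    """Accept if the average log-likelihood ratio exceeds a threshold."""
    llr = client.score(test_feats) - world.score(test_feats)
    return llr > threshold, llr

# Toy usage: the "client" EEG features live in a slightly shifted region
# of feature space compared with the background population.
rng = np.random.default_rng(7)
world_feats = rng.normal(0, 1, size=(500, 6))
client_feats = rng.normal(0.8, 1, size=(200, 6))
client, world = train_models(client_feats, world_feats)
print(authenticate(rng.normal(0.8, 1, size=(50, 6)), client, world))   # genuine
print(authenticate(rng.normal(0.0, 1, size=(50, 6)), client, world))   # impostor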
The EEG is a very noisy signal and its processing is a dif-
ficult task. For the feature extraction, researchers spatially
filter the raw EEG potentials by means of a surface Laplacian.
Subsequently, they increase the signal-to-noise
ratio and extract the features that better describe the mental
state to be recognized. The choice of the electrodes and fre-
quency band is based on the expertise available in the Brain
Computer Interfaces (BCI) community [72].
The experimental results indicated that EEG could be an
effective modality for person authentication and that the
specific method performs satisfactorily for this task. By
having a closer look at the experimental protocol though, one
can see that although the number of simulations is large,
the number of individuals involved is
very small (3 persons). It is obvious that no conclusions can
be drawn on such a small database. Another matter that the
authors note is that the mismatch between testing and training
increases from day to day. So, data collected in one day is
not enough for training robust models.
After the authors of [73] showed that the energy of brain potentials evoked during the processing of visual stimuli has potential for applications such as a stand-alone individual identification system, or as part of a multi-modal individual identification system, they pushed their research further. In their following study [73], they analyze the potential of the dominant frequency powers of gamma-band Visual Evoked Potential (VEP) signals as a biometric. The feature extraction is achieved by a subspace technique called Multiple Signal Classification (MUSIC), while the classification techniques used include the k-Nearest Neighbors (kNN) and Elman Neural Network (ENN) classifiers, evaluated with 10-fold Cross Validation Classification (CVC).
For the experimental procedure of the specific work, a total of 3,560 VEP signals from 102 subjects were used. There was a minimum of 10 and a maximum of 50 eye-blink-free VEP signals from each subject (in multiples of 10). Three different experiments were conducted with features produced by the EL, SMT, and the proposed feature extraction methods. The maximum ENN classification accuracy for the improved feature extraction method was 98.12 ± 1.26, while the classification performances for the EL and SMT methods were 96.94 ± 1.44 and 96.54 ± 1.23, respectively. For kNN, the corresponding maximum classification accuracies were 92.87 ± 1.49, 91.94 ± 1.54, and 96.13 ± 1.03, obtained for K = 1. The authors argue that their results clearly indicate the significant potential of brain electrical activity as a biometric.
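The classification part of such an evaluation can be sketched as follows, assuming the MUSIC-based feature extraction has already produced one gamma-band power vector per VEP trial; this is a generic kNN with 10-fold cross-validation, not a reproduction of the reported experiments.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_cv_accuracy(features, subject_labels, k=1, folds=10):
    """10-fold cross-validated identification accuracy of a k-NN classifier on
    per-trial feature vectors (one row per VEP trial)."""
    clf = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(clf, features, subject_labels, cv=folds)
    return scores.mean(), scores.std()

# Usage with synthetic placeholder data standing in for MUSIC-derived features:
# X = np.random.rand(3560, 61); y = np.random.randint(0, 102, size=3560)
# mean_acc, std_acc = knn_cv_accuracy(X, y, k=1)
```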
On this research topic, a recent study [74] proposes a multitask learning approach that contrasts with previous EEG-based methods. While earlier EEG techniques use a single task for classifier design and subsequent identification (signals recorded during imagination of repetitive left-hand movements or during resting with eyes open), the proposed method uses multiple related tasks simultaneously. The advantage obtained is that classifier learning can be guided more effectively in a hypothesis space, as it integrates information from the extra tasks. For the experiments, 180 recorded trials from 9 subjects were used. The accuracy rate proved to reach 95.6% for imagining left index finger movements.
Summarizing the elements provided in these works, we could say that brain activity may prove to be a promising modality for individual authentication. As mentioned above, due to its special character and the advantages it presents over other types of biometrics (confidentiality, difficulty of mimicry, not easy to steal), it could be useful for applications with special demands. There are a lot of things to be done, though, before this method can support a full real-time authentication system. The procedure requires the absolute participation of the subject and depends on his/her current mental condition, while the placement of the electrodes in the right positions and the processing of the EEG signal are significantly time-consuming.
2.15 Cognitive biometrics
An alternative biometric is described in [75]. In this study,
the simplicity of the interface is kept, while the restriction of
typing specific patterns is alleviated. The present work was
motivated by recent, independent studies in cognitive neuro-
science and psychiatry reporting that the generation of ran-
dom rhythms or numbers is a demanding cognitive task and
carries enough information to discriminate between differ-
ent clinical populations. When someone is asked to generate
(verbally or via keyboard) random numbers, there is a cog-
nitive load implied. This is due to the close interaction be-
tween short-term memory and internalized decision making
mechanisms. A closely related task is the generation of ran-
dom tapping rhythms. Finger tapping, for instance, requires
sensorimotor interaction and specific cortical networks. In-
terestingly, it has been demonstrated that everyone has his
own eigen-rhythms regulating spontaneous finger tapping.
At the experimental level, this is the first approach where human-generated time series of random latencies are tested as a biometric. The procedure for generating the random time-interval (RTI) signals is simple. The subject is asked to press the space key of the computer with the index finger of his/her dominant hand as irregularly as possible, until the screen shows the end of the exercise. The first time the subject encounters this task, he/she is provided beforehand with an example consisting of a 4 × 4 cm square, which appears and disappears on the screen at a random rhythm, synchronized with a sequence of beeps. The particular example is indicative of the sort of time series one has to create and, as is explicitly stated, its exact reproduction is not the objective of the task.
Moreover, the dynamics showed a prominent idiosyncratic character when realizations from different subjects were contrasted. The researchers established an appropriate similarity measure to systematize such comparisons and experimentally verified that it is feasible to recover someone's identity from RTI signals. By incorporating it in an SVM-based verification system, which was trained and tested using a medium-sized dataset (from 40 persons), an equal error rate of 5% was achieved. The method, though, has a major drawback: the enrolment procedure currently takes almost two minutes and requires the user's full cooperation. Such an enrolment procedure is considered highly intrusive for any kind of biometric application.
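A minimal sketch of a verification pipeline of this kind is given below: each tapping session (a sequence of inter-press latencies) is reduced to a fixed-length descriptor and a per-user SVM separates genuine from impostor sessions; the descriptor chosen here (simple statistics plus a coarse histogram) is an illustrative assumption, not the similarity measure proposed in [75].

```python
import numpy as np
from sklearn.svm import SVC

def rti_features(latencies_ms):
    """Turn one random-tapping session (sequence of inter-press latencies in ms)
    into a fixed-length descriptor: distributional statistics plus a coarse
    histogram.  Illustrative choice only."""
    lat = np.asarray(latencies_ms, dtype=float)
    hist, _ = np.histogram(lat, bins=10, range=(0, 2000), density=True)
    stats = [lat.mean(), lat.std(), np.median(lat), lat.min(), lat.max()]
    return np.concatenate([stats, hist])

def train_verifier(client_sessions, impostor_sessions):
    """Binary SVM separating one client's sessions from impostor sessions."""
    X = np.array([rti_features(s) for s in client_sessions + impostor_sessions])
    y = np.array([1] * len(client_sessions) + [0] * len(impostor_sessions))
    return SVC(kernel="rbf", probability=True).fit(X, y)

def accept_claim(clf, session, threshold=0.5):
    """Accept the identity claim if the client-class probability is high enough."""
    score = clf.predict_proba([rti_features(session)])[0, 1]
    return score >= threshold
```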
2.16 Otoacoustic emissions recognition (OAE)
A research project at the University of Southampton is ex-
amining whether hearing could be effective in recognizing
individuals by otoacoustic emissions [76]. If audio clicks are broadcast into the human ear, a healthy ear will send a response back [77,78]. These are called otoacoustic emissions. OAE testing is often used to screen newborns for hearing problems and is done by placing a small, soft microphone in a person's ear canal. Sound is then introduced
through a small flexible probe inserted in the ear. The mi-
crophone detects the inner ear’s response to the sound. The
overview of the detection system for cochlear hearing loss is illustrated in Fig. 12.
Fig. 12 Overview of the detection system for cochlear hearing loss [78]
reliability of using this source as a biometric modality. From
the total of 704 measurements reported in [76], 570 (81%)
were correctly classified.
The specificity of otoacoustic emissions to an individual and their stability over a 6-month period is demonstrated in [79]. Experiments performed on 760, 561 subjects and a smaller dataset indicated that otoacoustic emissions are surprisingly individual. The use of simple statistical techniques indicated an equal error rate of 3.53% at 95% confidence, improving to 2.35% at 90% confidence. The research suggests a level of permanence of at least 6 months.
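Equal error rates of this kind are computed from the score distributions of genuine and impostor comparisons; the following generic sketch shows one common way to estimate an EER from such score lists (it is not the statistical procedure used in [79]).

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a decision threshold over similarity scores and return the point where
    the false acceptance and false rejection rates are closest to equal."""
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    impostor_scores = np.asarray(impostor_scores, dtype=float)
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer, best_thr = np.inf, 1.0, None
    for thr in thresholds:
        far = np.mean(impostor_scores >= thr)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < thr)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer, best_thr = abs(far - frr), (far + frr) / 2.0, thr
    return eer, best_thr
```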
Even though otoacoustic emission may seem a strange modality as far as its possible applications are concerned, it could easily be used in many commercial products. For instance, it could be used to guard against mobile phone theft, where such a modality could check whether the user matches the profile of the owner. It could also be used together with a special telephone receiver for card transactions, presumably in conjunction with a PIN.
A cardholder would pick up the receiver and listen to a series
of clicks. His otoacoustic response would be measured and
checked against the information stored on the card and the records held by the credit card company or bank. Portable
music devices and cell phones could be equipped with an
acoustic biometric security device to prevent their use by
anyone other than a registered user.
2.17 Eye movement
A completely new type of biometric is based on eye move-
ment characteristics [80]. This work examined the reaction of human eyes to visual stimulation. The person to be identified is asked to follow a point displayed on a computer monitor. An eye tracker is used to collect information about the eye movement during the test. A very fast and accurate tracking system based on infrared reflection was used for this purpose.
The main challenge for this system was to convert the recorded eye movements into a set of features that can be directly used for identification. The dataset consists of probes. Each probe is the result of recording one person's eye movements during a stimulation lasting 8 seconds. The recordings were made at a frequency of 250 Hz, which means that each probe consists of 2048 single measurements. Each measurement consists of six integer values, which give the position of the stimulating point on the screen and the positions of the points the right and the left eye are looking at, respectively. In order to extract a set of discriminant features, the frequency spectrum was used [81]. The experiment was performed on
nine subjects. Each person was enrolled more than 30 times
and the last 30 trials were used for classification, giving 270
probes for a training set. The validation experiment gave an
average false acceptance rate of about 2% and a rather high
average false rejection rate of about 25%.
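A minimal sketch of how such probes could be turned into spectral features and matched is given below; the column layout of the probe, the use of the horizontal gaze error of one eye, and the choice of the first 64 spectrum bins are illustrative assumptions, not the feature set of [80].

```python
import numpy as np

def probe_spectrum(probe):
    """`probe` is assumed to have shape (2048, 6): stimulus x/y and gaze x/y for
    each eye.  Use the magnitude spectrum of the horizontal gaze error of one eye
    as a crude feature vector (illustrative choice)."""
    probe = np.asarray(probe, dtype=float)
    error = probe[:, 2] - probe[:, 0]            # assumed: right-eye x minus stimulus x
    spectrum = np.abs(np.fft.rfft(error))[:64]
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def nearest_neighbour_id(test_probe, gallery):
    """Identify the test probe as the enrolled subject whose average spectral
    template is closest in Euclidean distance."""
    feat = probe_spectrum(test_probe)
    dists = {sid: np.linalg.norm(feat - template) for sid, template in gallery.items()}
    return min(dists, key=dists.get)
```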
The continuous movement of the eye for biometric purposes is also suggested in [82]. The proposers of the method have conducted a case study to investigate the potential of the eye-tracking signal. They argue that the distance between the eyes proved to be the most discriminant feature (90% identification rate). The best dynamic feature was derived from the delta pupil size, which corresponds to the variation of the pupil size over time (60% identification success). The information obtained by measuring the size of the pupil itself proved to be weak, giving 40% identification. Combining different features does not seem to offer any considerable improvement. For the experiments, 12 subjects with normal or corrected-to-normal vision participated.
For comparison, the researchers created a static user template by taking the time averages for each subject. As long-term statistics, these were expected to carry information about the physiological properties of the subject's eyes. The dynamic user templates were formed by considering the time signal as a feature vector. In summary, eye movements appear to provide discriminatory information. Considering that both the training and test signals had a duration of 1 second, the recognition accuracy of 40–90% can be considered high according to the authors of the method, especially taking into account the low sampling rate (50 Hz).
In contrast to many biometric systems, like fingerprint and face recognition, which are based on physiological characteristics, eye movement identification combines both physiological and behavioral (brain) characteristics. This is an advantage over other biometric modalities, considering that liveness detection is embodied in this method. On the other hand, the specific method requires a conscious effort on behalf of the subject, which means that the system would fail in the case of, e.g., a drunken person. The researchers mention that there is a lot of work to be done to improve their methodology. The first experiments, though, show that eye movement identification may have potential.
2.18 Dental biometrics
Dental biometrics utilize dental radiographs for human iden-
tification. Radiographs are able to provide information about
the condition of teeth, their roots, jaw placement, and the
overall composition of the facial bones. The radiographs acquired after the victim's death are called postmortem (PM) and the radiographs acquired while the victim is alive are called antemortem (AM). A proposed method in [83] uses this information to identify individuals in the forensic domain. The paper presents an automatic method for matching dental radiographs that has two main stages: feature extraction and matching. The feature extraction stage uses anisotropic diffusion to enhance the images and a Gaussian mixture model to segment the dental work, if there is any. The matching stage has three sequential steps. In the first step (the tooth level), a shape registration method
aligns the tooth contours and computes the distance between
them. If dental work is present, an area-based metric is used
for matching it. The two matching distances are then com-
bined using posterior probabilities. In the second step, the
tooth correspondence is established for a PM and an AM
image and it is used to compute the similarity between the
pair of images. In the third step, the distances between sub-
jects are computed and used to retrieve the identities from a
database. Some examples of extracted tooth shapes are pre-
sented in Fig. 13.
The results provided in this paper are presented in three
main steps. The first step is matching at the tooth level,
where 414 PM and 738 AM teeth are used. In the second
step, teeth in the same rows are viewed as a unit and 166 PM
images are matched against 235 AM images. Finally, at the
third step, the identification task is performed. In this step,
11 PM subjects are matched to the 25 AM subjects. For the
first two steps, the hit rates given are 95% and 90%, respectively, while for the final step the retrieval accuracy is 72%, 91% and 100%, according to the number of top retrievals used (1, 4 and 7 top retrievals).
Dental work (DW) information is exclusively used in a
newer approach [84]. The proposed method for person iden-
tification is based on dental work and consists of three main
processing steps. Firstly, the segmentation of the dental work is achieved after pre-processing of the dental radiograph images. The information obtained contains details about the position of the dental work on both jaws, its size, and the distance between neighboring DWs. This information actually creates a “dental code” (DC), which is finally matched against the corresponding DC within the database.
The segmentation of the DW is performed by a snake (ac-
tive contour). Each DW is segmented with a separate snake.
In order to speed up the process and improve segmenta-
tion, the initial curves for all DWs are computed from a bi-
nary mask. Edit distance (Levenshtein distance) is used for
matching. To evaluate the proposed method, the researchers used 68 dental radiographs from a total of 46 subjects. To test the matching performance of the method, the implemented algorithm compares dental radiographs (DRs) of the genuine class and DRs of the impostor class. The equal error rate obtained for the proposed method on the above dataset was 11%.
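The edit-distance matching step can be sketched as follows, assuming each radiograph has already been reduced to a dental-code string; the encoding shown in the usage comment is hypothetical.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two dental-code strings."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def best_match(query_code, database):
    """Return the enrolled identity whose dental code has the smallest edit
    distance to the query code."""
    return min(database, key=lambda ident: levenshtein(query_code, database[ident]))

# Hypothetical usage: codes encode jaw, position and size class of each dental work.
# database = {"subject_01": "U3bU5aL6c", "subject_02": "U4aL7b"}
# best_match("U3bU5aL6b", database)
```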
Although experimental results show that dental-based approaches are promising, there are still a number of challenges to overcome according to the authors [83]. First of all, for both techniques, the experiments should be run on a larger database. Shape extraction is a problem for dental radiographs. For subjects with missing teeth, other features for identification must be explored. The method, as presented, examines the identification of individuals in the forensic domain, but it could easily be applied to living persons as well. However, a radiographic test procedure would be extremely intrusive and undesirable due to X-ray radiation hazards to human health. An image acquisition device not based on radioactivity should be used instead. Such a device is not available right now, although it may very well appear in the near future.
Fig. 13 Some examples of extracted tooth shapes [83]
2.19 DNA
DNA data differ from standard biometrics in several ways.
They require a tangible physical sample, as opposed to an impression, image, or recording. Their matching is not done in real time and, currently, not all stages of comparison are automated. DNA matching usually does not employ templates or feature extraction, but rather represents the comparison of actual samples [85].
In the matching procedure, DNA is isolated and cut up into shorter fragments containing known areas. The fragments are then sorted by size using gel electrophoresis and compared across different samples. A representative example of the identification that occurs with the DNA method is shown in Fig. 14 for a sexual assault case. DNA from suspects 1 and 2 is compared to DNA extracted from semen evidence. In this example, it can be seen that the DNA of suspect 1 and the sperm DNA found at the scene match. Suspect 2 has a profile totally different from the semen sample. DNA isolated from the victim, as well as human control DNA (K562), serve as standard size references and are included as controls [86].
Fig. 14 Example of DNA identification [86]
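Leaving the laboratory steps aside, the final comparison can be thought of as matching fragment-length (allele) profiles per genetic locus; the minimal sketch below illustrates this with hypothetical locus names and allele values, and is not a description of forensic practice.

```python
# Hypothetical STR profiles: locus name -> pair of allele (fragment-length) labels.
suspect_1 = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
suspect_2 = {"D3S1358": (16, 18), "vWA": (15, 17), "FGA": (20, 22)}
evidence  = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}

def profiles_match(profile_a, profile_b):
    """Declare a match only if every locus shared by the two profiles carries the
    same (unordered) pair of alleles."""
    shared = set(profile_a) & set(profile_b)
    return all(sorted(profile_a[locus]) == sorted(profile_b[locus]) for locus in shared)

print(profiles_match(suspect_1, evidence))  # True  -> consistent with the evidence
print(profiles_match(suspect_2, evidence))  # False -> excluded
```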
DNA provides an extremely high counterfeit barrier, be-
cause a counterfeiter can never replicate the unique DNA se-
quence that identifies a person. Although DNA could be the
ultimate biometric technology, it still presents a lot of prob-
lems, as the matching process is not yet fully automated (nor fast). According to [87,88], automatic detection is feasible. The authors measured the intrinsic charge of DNA molecules with an array of
silicon transistors, which allowed them to avoid the markers
and labels used in conventional detection techniques.
The interest that DNA identification systems raise can easily be understood by considering the amount of money spent every year on research on this topic. In particular, the USA federal funding, which reached 232.6 million dollars in 2004, increased by 100.7 million dollars the following year. This amount was requested to aid local, state and federal services in improving their DNA collection systems, with added funding for staff, technology, training and assistance [89].
3 Conclusion
In this paper, the emerging technologies in biometrics were presented. A large number of body parts, personal and behavioral characteristics and imaging methods have been suggested over the past years, including face, eyes, mouth, teeth, ears, hands, signatures, typing styles and others. Although the maturity of most of the proposed techniques has reached a certain level, a variety of unsolved problems still remain, while the demand for the various kinds of applications that will be able to serve the various needs for security is increasing. The ongoing research on new ideas, as well as the continual development of new modalities, gives evidence of the above deficiency.
Methods that use more advanced human features and so-
phisticated electronic devices have been proposed. Thermo-
gram, ECG, DNA, veins, nails, otoacoustic emissions, skin
spectroscopy and infrared palms are some of them. How-
ever, even in the most recent technologies, there are a lot
of problems concerning the efficiency of each system. Some
techniques require expensive equipment of high technology
while others require time consuming enrolment procedures
of high intrusiveness. Although researchers publish results
that usually outperform their competitors, there is still no
system that can guarantee reliably high performance for real
security applications. Furthermore, most of the emerging
biometric systems have not been tested on large databases.
An issue for the following years would be independent performance analysis on multimodal databases, which would be essential to assess performance and compare modalities to each other. However, the remarkable variety of the new biometric methods and modalities, as well as the promptness of their development, prepares us for the developments that we will meet in the near future.
References
1. Jain AK, Flynn P, Ross AA (2007) Handbook of biometrics.
Springer, New York
2. Kotropoulos C, Tefas A, Pitas I (2000) Frontal face authentica-
tion using discriminating grids with morphological feature vec-
tors. IEEE Trans Multimedia 2:14–26
3. Wayman JL, Jain AK, Maltoni D, Maio D (2004) Biometric sys-
tems: technology, design and performance evaluation. Springer,
New York
4. Jain AK, Li SZ (2005) Handbook of face recognition. Springer,
New York
5. Zhao W, Chellappa R, Rosenfeld A, Phillips P (2000) Face recog-
nition: a literature survey. UMD CfAR Technical Report CAR-
TR-948
6. Tefas A, Kotropoulos C, Pitas I (2001) Using support vector ma-
chines to enhance the performance of elastic graph matching for
frontal face authentication. IEEE Trans Pattern Anal Mach Intell
23:735–746
7. Messer K, Kittler J, Sadeghi M, Marcel S, Marcel C, Bengio S,
Cardinaux F, Sanderson C, Czyz J, Vandendorpe L, Srisuk S,
Petrou M, Kurutach W, Kadyrov A, Paredes R, Kepenekci B,
Tek F, Akar G, Deravi F, Mavity N (2003) Face verification com-
petition on the xm2vts database. In: AVBPA03, Guildford, United
Kingdom, pp 964–974
8. Messer K, Kittler J, Sadeghi M, Hamouz M, Kostin A, Car-
dinaux F, Marcel S, Bengio S, Sanderson C, Poh N, Rodriguez Y,
Czyz J, Vandendorpe L, McCool C, Lowther S, Sridharan S, Chan-
dran V, Vidal PRE, Bai L, Shen L, Wang Y, Yueh-Hsuan C, Hsien-
Chang L, Yi-Ping H, Heinrichs A, Muller M, Tewes A, Mals-
burg Cvd, Wurtz R, Wang Z, Xue F, Ma Y, Yang Q, Fang C,
Ding X, Lucey S, Goss R, Schneiderman H (2004) Face authenti-
cation test on the banca database. In: ICPR04, Cambridge, United
Kingdom, pp IV: 523–532
9. Wang Y, Acero LD (2005) Spoken language understanding. Signal
Process Mag IEEE 22:16–31
10. Markowitz JA (2000) Voice biometrics. In: Communications of
the ACM, vol 43. ACM, New York, pp 66–73
11. Faundez-Zanuy E, Monte-Moreno M (2005) State-of-the-art in
speaker recognition. IEEE Aerosp Electron Syst Mag 20:7–12
12. Blackburn T, Butavicius M, Graves I, Hemming D, Ivancevic V,
Johnson R, Kaine A, McLindin B, Meaney K, Smith B, Sunde J
(2002) Biometrics technology review. Australian department of
Defense science and Technology Organisation
13. Cappelli R, Maio D, Maltoni D, Wayman J, Jain A (2006) Perfor-
mance evaluation of fingerprint verification systems. IEEE Trans
Pattern Anal Mach Intell 28:3–18
14. Nixon MS, Tan TN, Chellappa R (2005) Human identification
based on gait. International series on biometrics. Springer, Berlin.
AK Jain, D Zhang (eds)
15. Nixon MS, Carter JN (2006) Human ID based on gait. Proc IEEE
94(11):2013–2024
16. Boyd JE, Little JJ (2005) Biometric gait recognition. In: Lecture
notes in computer science, vol 3161. Springer, Berlin, pp 19–42
17. Cunado D, Nixon MS, Carter JN (2003) Automatic extraction and
description of human gait models for recognition purposes. Com-
put Vis Image Underst 90:1–41
18. Abdelkader B, Cutler R, Nanda H, Davis L (2001) Eigengait:
motion-based recognition of people using image self-similarity.
In: Audio- and video-based biometric person authentication,
Halmstad, Sweden
19. Boyd JE (2001) Video phase-locked loops in gait recognition.
In: International conference on computer vision, Vancouver, BC,
pp 696–703
20. Goffredo M, Seely RD, Carter JN, Nixon MS (2008) Marker-
less view independent gait analysis with self-camera calibration.
In: IEEE international conference on automatic face and gesture
recognition, Amsterdam, The Netherlands, pp 17–19
21. Bouchrika I, Nixon M (2008) Exploratory factor analysis of gait
recognition. In: 8th IEEE international conference on automatic
face and gesture recognition, Amsterdam, The Netherlands
22. Equinox: Face database, equinoxsensors.com/products/HID.html
23. Socolinsky DA, Selinger A, Neuheisel JD (2003) Face recogni-
tion with visible and thermal infrared imagery. Comput Vis Image
Underst 91:72–114
24. Turk M, Pentland AP (1991) Eigenfaces for recognition. J Cogn
Neurosci 3:71–86
25. Sim T, Sukthankar R, Mullin M, Baluja S (2000) Memory-based
face recognition for visitor identification. In: FG’00: Proceedings
of the fourth IEEE international conference on automatic face and
gesture recognition 2000. IEEE Comput Soc, Los Alamitos, p 214
26. Buddharaju P, Pavlidis I, Kakadiaris I (2004) Face recognition in
the thermal infrared spectrum. In: CVPRW’04: Proceedings of
the 2004 conference on computer vision and pattern recognition
workshop (CVPRW’04), vol 8. IEEE Comput Soc, Los Alamitos,
p 133
27. Gyaourova A, Bebis G, Pavlidis I (2004) Fusion of infrared and
visible images for face recognition. In: European conference on
computer vision (ECCV), May, Prague, pp 11–14
28. Singh S, Gyaourova A, Bebis G, Pavlidis I (2004) Infrared and
visible images fusion for face recognition. In: SPIE defence and
security symposium (biometric technology for human identifica-
tion), April, Orlando, pp 12–16
29. Buddharaju P, Pavlidis IT, Tsiamyrtzis P, Bazakos M (2007)
Physiology-based face recognition in the thermal infrared spec-
trum. IEEE Trans Pattern Anal Mach Intell 29(4):613–626
30. Pavlidis I, Tsiamyrtzis P, Manohar C, Buddharaju P (2006) Bio-
metrics: face recognition in thermal infrared. In: Biomedical eng
handbook. CRC Press, Boca Raton
31. Zhihong Pan MP, Healey G, Tromberg B (2003) Face recognition
in hyperspectral images. In: IEEE computer society conference
on computer vision and pattern recognition (CVPR 2003), June,
Madison, WI, USA, pp 334–339
32. Zhihong Pan MP, Healey G, Prasad M, Tromberg B (2003) Face
recognition in hyperspectral images. IEEE Trans Pattern Anal
Mach Intell 25:1552–1560
33. Zhihong P, Healey G, Tromberg B (2007) Hyperspectral face
recognition under unknown illumination. Opt Eng 46(7):077201–
077209
34. Li SZ, Chu R, Liao S, Zhang L (2007) Illumination invariant face
recognition using near-infrared images. IEEE Trans Pattern Anal
Mach Intell 29:627–639
35. Zou X, Kittler J, Messer K (2005) Face recognition using active
near-IR illumination. In: Proc British machine vision conf, Sept
2005
36. Li SZ, Chu RF, Ao M, Zhang L, He R (2006) Highly accurate and
fast face recognition using near infrared images. In: Proc IAPR
int’l conf biometric, Jan 2006, pp 151–158
37. Guan E, Rafailovich-Sokolov S, Afriat I, Rafailovich M, Clark R
(2004) Analysis of the facial motion using digital image speckle
correlation. In: Mechanical properties of bio-inspired and biolog-
ical materials, V MRS fall Meeting, December 2004
38. Makoto Omata TH, Hangai S (2001) Lip recognition using mor-
phological pattern spectrum. In: AVBPA, pp 108–114
39. Choras M (2008) Human lips recognition. In: Computer recogni-
tion systems 2. ASC, vol 45. Springer, Berlin, pp 838–843
40. Kong A, Zhang D, Kamel M (2009) A survey of palmprint recog-
nition. Pattern Recogn 42(7):1408–1418
41. Lin C, Fan K (2004) Biometric verification using thermal images
of palm-dorsa vein patterns. IEEE Trans Circuits Syst Video Tech-
nol 14:199–213
42. Beucher S (1991) The watershed transformation applied to image
segmentation. In: Conference on signal and image processing in
microscopy and microanalysis, September 1991, pp 299–314
43. Ferrer M, Travieso C, Alonso J (2005) Using hand knuckle texture
for biometric identification. In: 9th annual international Carnahan
conference on security technology (CCST’05), pp 74–78
44. Ravikanth C, Kumar A (2007) Biometric authentication using
finger-back surface. In: IEEE computer society conference on
computer vision and pattern recognition (CVPR), Los Alamitos,
CA, USA, pp 16
45. Miura N, Nagasaka A, Miyatake T (2007) Extraction of finger-
vein patterns using maximum curvature points in image profiles.
The Institute of Electronics, Information and Communication En-
gineers, vol 90(8), pp 1185–1194
46. Miura N, Nagasaka A, Miyatake T (2004) Feature extraction of
finger vein patterns based on iterative line tracking and its appli-
cation to personal identification. Syst Comput Jpn 35:61–71
47. Miura N, Nagasaka A, Miyatake T (2004) Feature extraction of
finger-vein patterns based on repeated line tracking and its appli-
cation to personal identification. Mach Vis Appl 15:194–203
48. Hitachi, http://www.hitachi-hec.co.jp/virsecur/secua_vein/vein01.
htm
49. Nail ID. BIOPTid the human barcode (2009). Biometrics systems
division, http://www.humanbarcode.com
50. Lumidigm (2009) http://www.lumidigm.com/index.html
51. Rowe RK, Corcoran SP, Nixon K, Biometric identity determina-
tion using skin spectroscopy, Lumidigm, Inc, 800 Bradbury SE,
Suite 213, Albuquerque, NM, USA 87106. www.lumidigm.com
52. Davar P (2008) Spectroscopically enhanced method and system
for multi-factor biometric authentication. IEICE Trans Inf Syst
E91-D(5):1369–1379
53. Burge M, Burger W (2000) Ear biometrics in computer vision. In:
ICPR, pp 2822–2826
54. Meijerman L, Sholl S, De Conti F, Giacon M, van der Lugt C,
Drusini A, Vanezis P, Maat G (2003) Exploratory study on classifi-
cation and individualisation of earprints. Forensic Sci Int 140:91+
55. Chang KI, Bowyer KW, Sarkar S, Victor B (2003) Comparison
and combination of ear and face images in appearance-based bio-
metrics. IEEE Trans Pattern Anal Mach Intell 25:1160–1165
56. Moreno B, Sanchez A, Velez J (1999) On the use of outer ear im-
ages for personal identification in security applications. In: IEEE
33rd annual international Carnahan conference on security tech-
nology, pp 469–476
57. Forensic Evidence, http://forensic-evidence.com/site/id/idearnews.
html
58. Maat G (1999) Ear print project-brief report on the pilot study.
In: Barge’s anthropologica, Leiden University Medical Centre,
September–November 1999
59. McOwan P, Everitt R, Artificial intelligence to increase security of
online shopping and banking, Queen Mary, University of London.
http://www.qmw.ac.uk/poffice/nr270803.shtml
60. Bleha S, Slivinsky C, Hussien B (1990) Computer-access security
systems using keystroke dynamics. IEEE Trans Pattern Anal Mach
Intell, 1217–1222
61. Ahmed AE, Traore I (2007) A new biometric technology based
on mouse dynamics. IEEE Trans Dependable Secure Comput
4(3):165–179
62. Revett K, Jahankhani H, Magalhaes ST, Santos HMD (2008)
A survey of user authentication based on mouse dynamics. In:
Communications in computer and information science, vol 12.
Springer, Berlin, Heidelberg, pp 210–219
63. Biel L, Pettersson O, Philipson L, Wide P (2001) ECG analy-
sis: A new approach in human identification. IEEE Trans Instrum
Meas 50:808–812
64. Hoekema R, Uijen G, van Oosterom A (2001) Geometrical as-
pect of the interindividual variability of multilead ECG recordings.
IEEE Trans Biomed Eng 48:551–559
65. Irvine J, Wiederhold B, Gavshon L, Israel S, McGehee S,
Meyer R, Wiederhold M (2001) Heart rate variability: a new bio-
metric for human identification. In: International conference on ar-
tificial intelligence (IC-AI’01), Las Vegas, Nevada, pp 1106–1111
66. Israel SA et al (2005) ECG to identify individuals. Pattern Recogn
38(1):133–142
67. Chan ADC, Hamdy MM, Badre A, Badee V (2006) Person iden-
tification using electrocardiograms. Canadian conference on elec-
trical and computer engineering, CCECE’06, May 2006, pp 1–4
68. Tsao YT, Shen TW, Ko TF, Lin TH (2007) The morphology of the
electrocardiogram for evaluating ECG biometrics. In: 9th inter-
national conference on e-health networking, application and ser-
vices, 19–22 June 2007, pp 233–235
69. Marcel S, Millán J del R (2003) A new method to identify indi-
viduals using signals from the brain. In: Proceedings of the 4th
international conference on information communication and sig-
nal processing, Singapore, pp 15–18
70. Poulos VCM, Rangoussi M, Evangelou A, Person identification
based on parametric processing on the EEG. In: Proceedings of the
sixth international conference on electronics, circuits and systems,
vol 1, pp 283–286
71. Marcel S, Millán J del R (2007) Person authentication using brain-
waves (EEG) and maximum a posteriori model adaptation. IEEE
Trans Pattern Anal Mach Intell 29(4):743–752. Special issue on
biometrics
72. Millán J (2002) Brain-computer interfaces. In: Arbib MA (ed)
The handbook of brain theory and neural networks, 2nd edn. MIT
Press, Cambridge
73. Palaniappan R, Mandic DP (2007) Biometrics from brain elec-
trical activity: A machine learning approach. IEEE Trans Pattern
Anal Mach Intell 29:738–742
74. Sun S (2008) Multitask learning for EEG-based biometrics. In:
19th international conference on pattern recognition, ICPR 2008,
8–11 Dec 2008, pp 1–4
75. Laskaris N, Zafeiriou S, Garefa A (2009) Use of random
time-interval (RTIs) for biometric verification. Pattern Recogn
42(11):2787–2796
76. Hoppe U, Weiss S, Stewart RW, Eysholdt U (2001) An automatic
sequential recognition method for cortical auditory evoked poten-
tials. IEEE Trans Biomed Eng 48(2):154–164
77. Dietl H, Weiss S (2004) Cochlear hearing loss detection system
based on transient evoked otoacoustic emissions. In: Proceedings
of IEEE EMBSS postgraduate conference, Southampton
78. Dietl H, Weiss S (2004) Parameterisation of transient evoked
otoacoustic emissions. In: Proceedings of biosignal international
EURASIP conference
79. Swabey MA, Chambers P, Lutman ME, White NM, Chad JE,
Brown AD, Beeby S (2009) The biometric potential of transient
otoacoustic emissions. Int J Biom 1(3):349–364
80. Kasprowski P, Ober J (2004) Eye movements in biometrics. In:
ECCV workshop BioAW, pp 248–258
81. Rabiner LR, Schafer RW (1978) Digital processing of speech sig-
nals. Prentice Hall, Englewood Cliffs
82. Bednarik R, Kinnuenen T, Mihaila A, Franti P (2005) Eye-
movements as a biometric. In: Lecture notes in computer science,
vol 3540. Springer, Berlin, Heidelberg, pp 780–789
83. Chen H, Jain AK (2005) Dental biometrics: alignment and match-
ing of dental radiographs. IEEE Trans Pattern Anal Mach Intell
27:1319–1326
84. Hofer M, Maranara AN (2007) Dental biometrics: human identi-
fication based on dental work information. In: IEEE proceedings
of the XX Brazilian symposium on computer graphics and image
processing, pp 281–286
85. International Biometric Group, http://www.biometricgroup.com/
index.html
86. Meeker-O’Connell A, How evidence works. How stuff works.
http://www.howstuffworks.com/dna-evidence.htm
87. Pouthas F, Gentil C, Cote D, Bockelmann U (2004) DNA de-
tection on transistor arrays following mutation-specific enzymatic
amplification. Appl Phys Lett 84:1594–1596
88. Pouthas F, Gentil C, Cote D, Zeck G, Straub B, Bockelmann U
(2004) Spatially resolved electronic detection of biopolymers.
Phys Rev E 70(1–8):031906
89. Yen RC (2004) DNA typing and prospects for biometrics. In: Bio-
metric identification Seminar Forensic, Department of Defence,
Biometric Management Office, Miltreck Systems Inc USA, 16
June 2004