A Novel Fingerprint Quality Assessment Based on
Gabor Filters
Igor Andrezza, Erick Borges and Arnaldo Gualberto
Vsoft Tecnologia, João Pessoa-PB, Brazil
Email: igorlpa90@gmail.com, erickvagnerr@gmail.com and arnaldo.g12@gmail.com

João Brasileiro and Herman Gomes
Federal University of Campina Grande, Campina Grande-PB, Brazil
Email: joaojanduy@copin.ufcg.edu.br and hmg@computacao.ufcg.edu.br

Leonardo Batista
Federal University of Paraiba, João Pessoa-PB, Brazil
Email: leonardo@ci.ufpb.br
Abstract—Fingerprints are the most widely deployed biometric characteristic. However, fingerprint recognition may be influenced by many factors (e.g., skin conditions, sensor conditions), and matching algorithms are highly affected by the quality of the images involved. This work proposes a novel method for Fingerprint Quality Assessment (FQA) based on the analysis of Gabor filter responses on a fingerprint image. The correlation between the worst-quality templates and the matching score is also analyzed. The method is validated on the FVC2000DB3, FVC2004DB2, FVC2004DB3, and FVC2006DB3 databases. The proposed approach is compared with other FQA methods and evaluated with different matching algorithms for a fair comparison. The results indicate that the proposed method identifies the images that most affect the error rates of an AFIS better than the other methods in the literature.
I. INTRODUCTION
Humans have been able to identify each other by voice, appearance, or gait since the first signs of rationality and social life. Over the years, social development has increased the use of biometric identification, mainly for identifying criminal recidivism and for access control [1]. With the development of technology, large-scale biometric verification and identification have become trivial tasks, used for criminal identification, performing remote financial transactions, or boarding a commercial flight [2].
The use of token-based (e.g., ID cards) or knowledge-based (e.g., passwords) systems has become common. However, these traditional means of establishing a person's identity can be easily lost, shared, manipulated, or stolen. Biometrics makes it possible to establish an identity based on who you are, rather than on what you possess or what you remember. In some applications, biometrics can supplement traditional methods, adding a further level of security (dual-factor authentication) [2].
Fingerprints are the most extensively used biometric characteristic because of their well-known persistence and distinctiveness, as well as the cost and maturity of fingerprint products [3]. Owing to these advantages over other methods, fingerprint recognition has become a standard routine in forensics. In addition, agencies have been set up worldwide and criminal databases have been established. For instance, the FBI fingerprint identification division was created in 1924 with a database of 810,000 fingerprint samples [2], [4].
With the rapid expansion of fingerprint use in forensics, manual fingerprint identification became unfeasible. Thus, some 40 years later, agencies began to invest significant effort in developing Automatic Fingerprint Identification Systems (AFIS). Nowadays, AFISs are used by most law enforcement agencies in the world [3].
A wide variety of factors influence the quality of a fingerprint image, such as skin conditions (e.g., dryness, moisture, dirt, cuts, and bruises), sensor conditions (e.g., dirt, noise, size), and other acquisition conditions such as user cooperation or crime scene preservation in forensic settings. The recognition performance of a fingerprint matcher is strongly affected by the quality of the images involved [3], [5]–[7]. When quality is sufficient, the ridge-valley flow is well defined and a reliable set of minutiae can be extracted. If the fingerprint is very noisy, the minutiae extraction algorithm may detect a large number of spurious minutiae and miss several genuine ones [8], [9].
Methods to measure the quality of fingerprint images are therefore important to control matching error, for instance by requesting additional samples in case of a low-quality acquisition or by adapting feature extraction and matching schemes to low-quality areas. As a consequence, a considerable number of studies have addressed how best to estimate the quality of fingerprints for AFIS [9]–[11] and, more recently, also for latent (forensic) fingerprints [12], [13], where the latter work also considers the impact of different sensor devices.
When assessing the predictive power of a particular quality index, the correlation between recognition accuracy (e.g., expressed as EER) and the quality index of the involved images plays a key role [14]. This work presents a novel Fingerprint Quality Assessment (FQA) method based on the analysis of Gabor filter responses on a fingerprint image. The main advantages of this approach are:
(i) the detection of low-quality images that can degrade even state-of-the-art matching approaches;
(ii) EER can be improved by 50% by removing less than 6% of the worst-quality images;
(iii) no additional processing is required beyond widely used fingerprint enhancement techniques.
The rest of this paper is organized as follows. Section
2 gives a brief literature survey of the Fingerprint Quality
Assessment methods, followed by the full description of the
proposed algorithm in Section 3. Experiments and results
are shown in Section 4 and the last section presents our
conclusions from this work.
II. RELATED WORKS
Fingerprint quality assessment has attracted efforts from both academia and industry. The existing studies may be classified into three categories: (i) segmentation-based approaches; (ii) single feature-based approaches; and (iii) solutions based on multi-feature fusion, which can be achieved via linear fusion or classification [15].
Segmentation-based approaches. Approaches in the first category either represent the quality of the foreground area or first segment the foreground from the image. Shen et al. [16] proposed a method based on Gabor features. Each block of the image is filtered using a Gabor filter with m different directions. If a block has high quality (i.e., a strong ridge direction), the responses of some filters are larger than the others; in poor-quality blocks or background blocks, the m filter responses are similar. The standard deviation of the m filter responses is then used to determine the quality of each block (good or poor). The quality index (QI) of the whole image is finally computed as the percentage of foreground blocks marked as good. If the QI is lower than a predefined threshold, the image is rejected. Poor-quality images are additionally categorized as smudged or dry [16].
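To make the idea concrete, the following is a minimal sketch of such a block-wise quality index, assuming pre-computed mean absolute Gabor responses per block; the threshold value and the helper name shen_quality_index are illustrative choices, not taken from [16].

```python
import numpy as np

def shen_quality_index(block_responses, std_threshold=0.1, foreground_mask=None):
    """Block-wise quality index in the spirit of Shen et al. [16].

    block_responses: array of shape (n_blocks, m) holding the mean absolute
    Gabor response of each block for m filter orientations.
    A block is considered 'good' when the responses vary strongly across orientations.
    """
    stds = block_responses.std(axis=1)                # spread across orientations
    if foreground_mask is None:
        foreground_mask = np.ones(len(stds), dtype=bool)
    good = (stds > std_threshold) & foreground_mask   # strong ridge direction
    n_fg = foreground_mask.sum()
    return good.sum() / n_fg if n_fg else 0.0         # QI: fraction of good foreground blocks
```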
Yao et al. [17] proposed an approach based on the minutiae template only, using the convex hull and Delaunay triangulation of the minutiae points to measure the area of the informative region. This algorithm therefore depends on a minutiae extraction step.
Single feature-based approaches. Chen et al. [18] estimated the power spectrum ring with Butterworth functions instead of observing the pixel information directly in the spectrum image. In [19], Lee et al. reviewed approaches based on the local standard deviation, the directional contrast of local blocks, and Gabor features, and proposed a feature obtained by analyzing the Fourier spectrum of a fingerprint image. Their approach depends on the pixel information of the Fourier spectrum image, which varies across different image settings.
Multi-feature fusion methods. The work presented by Lim et al. [20] is an example of the final category. They compute the following features in each block: Orientation Certainty Level (OCL), ridge frequency, ridge thickness, and ridge-to-valley thickness ratio. Blocks are then labeled as good, undetermined, bad, or blank by setting thresholds for the four features. A local quality score SL is finally computed from the total number of good, undetermined, and bad quality blocks in the image.
The state-of-the-art quality metric, NIST Fingerprint Image Quality (NFIQ), was proposed by Tabassi et al. [21]. Their approach employs an 11-dimensional feature vector (exploiting characteristics such as ridge orientation flow, local ridge curvature, and local contrast) to estimate a matching score and classifies the result into five levels through a trained neural network model.
III. FINGERPRINT QUALITY ASSESSMENT
The approach proposed in this work is based on the iterative fingerprint enhancement algorithm presented by Turroni et al. [8]. However, the presented method requires only some computations from the first iteration of [8]. The proposed approach is composed of four main steps: (i) filter-bank convolution, (ii) combined image computation, (iii) homogeneity image computation, and (iv) metric computation. Each step is detailed in the following sub-sections. Figure 1 shows an overview of the proposed method.
A. Convolution with Gabor filter-bank
According to Turroni et al. [8], a Gabor filter is defined by a sinusoidal plane wave (second term in (1)) tapered by a Gaussian (first term in (1)). The two-dimensional Gabor filter is defined by:

g(x, y : \theta, f) = \exp\left(-\frac{1}{2}\left[\frac{x_\theta^2}{\sigma_x^2} + \frac{y_\theta^2}{\sigma_y^2}\right]\right) \cdot \cos(2\pi f \cdot x_\theta)   (1)

where θ is the orientation of the filter and [xθ, yθ] are the coordinates of [x, y] after a clockwise rotation by an angle of (90° − θ). Such a filter depends on four parameters (θ, f, σx, σy), where σx and σy are the standard deviations of the Gaussian envelope along the x and y axes, respectively.
A Gabor filter-bank is defined as a set G = {g_{i,j}(x, y) | i = 1..no, j = 1..nf} of Gabor filters, where no is the number of discrete orientations {θi | i = 1..no} and nf is the number of discrete frequencies {fj | j = 1..nf}.
Let I be an h × w fingerprint image. The output of the convolution between I and the filter-bank G is a set V of no · nf images, where V_{i,j} = [v^{i,j}_{x,y}], x = 1..w, y = 1..h, denotes the response image to a filter g_{i,j} with orientation θi and frequency fj. In this work, no = 12 and nf = 4 are used. Figure 2 shows an example of a Gabor filter-bank built with six orientations and three frequencies.
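For illustration, below is a minimal NumPy sketch of the filter-bank construction and convolution described above; the kernel size, σx, σy, and the particular frequency values are placeholder assumptions rather than the parameters used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=25):
    """2-D Gabor filter following Eq. (1): Gaussian envelope times a cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates clockwise by (90 degrees - theta), as in the paper's convention.
    angle = np.pi / 2 - theta
    x_t = x * np.cos(angle) + y * np.sin(angle)
    y_t = -x * np.sin(angle) + y * np.cos(angle)
    envelope = np.exp(-0.5 * (x_t ** 2 / sigma_x ** 2 + y_t ** 2 / sigma_y ** 2))
    return envelope * np.cos(2 * np.pi * freq * x_t)

def filter_bank_responses(image, n_orient=12, freqs=(1/5, 1/7, 1/9, 1/11)):
    """Convolve the image with n_orient x len(freqs) Gabor filters (the set V)."""
    thetas = [i * np.pi / n_orient for i in range(n_orient)]
    return [fftconvolve(image, gabor_kernel(t, f), mode='same')
            for t in thetas for f in freqs]
```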
B. Combined Image
The set of responses V, obtained after the previous step, is used to compute a single combined image C = [c_{x,y}], x = 1..w, y = 1..h, where the combination is performed according to the maximum filter response:

c_{x,y} = v^{l,t}_{x,y}   (2)

where l, t = \arg\max_{i,j}\{|v^{i,j}_{x,y}|\}, i = 1..no, j = 1..nf. Negative values correspond to ridge-region responses, while positive values correspond to valley-region responses. As a side effect of this stage, a pixel-level frequency image F = {f_{x,y} | x = 1..w, y = 1..h} and an orientation image O = {o_{x,y} | x = 1..w, y = 1..h} are also obtained.

Fig. 1. An overview of the proposed method.

Fig. 2. A Gabor filter-bank with six orientations θ (columns) and three frequencies f (rows) [8].
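A brief sketch of the max-response combination of Eq. (2), assuming the list of response images is ordered orientation-major as in the previous sketch; returning O and F as value maps here is purely illustrative.

```python
import numpy as np

def combine_responses(responses, thetas, freqs):
    """Combine filter responses into C (Eq. 2) plus pixel-wise O and F maps."""
    stack = np.stack(responses)                      # shape: (n_o * n_f, h, w)
    winner = np.abs(stack).argmax(axis=0)            # index of the max |response| per pixel
    rows, cols = np.indices(winner.shape)
    combined = stack[winner, rows, cols]             # signed value of the winning filter
    n_f = len(freqs)
    orientation = np.asarray(thetas)[winner // n_f]  # O: orientation of the winning filter
    frequency = np.asarray(freqs)[winner % n_f]      # F: frequency of the winning filter
    return combined, orientation, frequency
```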
C. Homogeneity Image
The homogeneity image H = {h_{x,y} | x = 1..w, y = 1..h} encodes the local ridge flow homogeneity. In theory, except for singularity regions, ridges run smoothly across the fingerprint pattern, and sudden changes in orientation and frequency should not exist. In practice, such discontinuities are determined by noise or ridge alteration [8]. The homogeneity at [x, y] is defined as follows:

h_{x,y} = \frac{\sum_{(p,k)} |C_{p,k}| \cdot S_{p,k}}{\sum_{(p,k)} |C_{p,k}|}   (3)

where p, k = -\frac{m}{2}, .., \frac{m}{2} and S_{p,k} is an orientation homogeneity measure defined as:

S_{p,k} = \frac{\pi}{2} - |\angle(O_{x,y}, O_{p,k})|   (4)

\angle(\theta_1, \theta_2) = \begin{cases} \theta_1 - \theta_2 & \text{if } -\pi/2 \le \theta_1 - \theta_2 < \pi/2 \\ \pi + \theta_1 - \theta_2 & \text{if } \theta_1 - \theta_2 < -\pi/2 \\ \theta_1 - \theta_2 - \pi & \text{if } \theta_1 - \theta_2 \ge \pi/2 \end{cases}   (5)

Finally, H is normalized to fit values in the range [0, 1] as defined by Turroni et al. [8]. Lower values denote lower homogeneity.
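The following sketch illustrates Eqs. (3)-(5), assuming the combined image C and orientation map O from the previous step; the window size m, the wrap-around border handling via np.roll, and the final min-max normalization are simplifications of the procedure in [8].

```python
import numpy as np

def angle_diff(t1, t2):
    """Signed orientation difference mapped to [-pi/2, pi/2), as in Eq. (5)."""
    d = t1 - t2
    d = np.where(d < -np.pi / 2, d + np.pi, d)
    d = np.where(d >= np.pi / 2, d - np.pi, d)
    return d

def homogeneity_image(combined, orientation, m=8):
    """Local ridge-flow homogeneity H (Eqs. 3-4), normalized to [0, 1]."""
    weights = np.abs(combined)
    num = np.zeros_like(weights)
    den = np.zeros_like(weights)
    half = m // 2
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            o_shift = np.roll(orientation, (dy, dx), axis=(0, 1))
            w_shift = np.roll(weights, (dy, dx), axis=(0, 1))
            s = np.pi / 2 - np.abs(angle_diff(orientation, o_shift))  # Eq. (4)
            num += w_shift * s                                        # Eq. (3) numerator
            den += w_shift                                            # Eq. (3) denominator
    h = num / np.maximum(den, 1e-9)
    # Simple min-max normalization to [0, 1]; the paper follows Turroni et al. [8] here.
    return (h - h.min()) / (h.max() - h.min() + 1e-9)
```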
D. Quality Measure
Aiming to decrease the importance of pixels in low-homogeneity regions, a pixel-wise multiplication is performed between C and H. Let P = {p_{x,y} | x = 1..w, y = 1..h} be computed as:

p_{x,y} = |c_{x,y}| \cdot h_{x,y}   (6)

At this stage, only the magnitude of the filter response matters, not whether the pixel belongs to a ridge or a valley; therefore, the absolute value is used. Then, the histogram of P is computed. When a fingerprint image has good quality, the values of p are higher and the histogram is denser on the right side. Otherwise, if the fingerprint image has low quality, the histogram is denser on the left side (lower values). This is shown in Figure 3.
In order to identify this behavior, the skewness is computed. Skewness is a measure of the asymmetry of the probability distribution of a real-valued variable about its mean. Negative skew indicates that the tail on the left side is longer or fatter than the right, and positive skew indicates the opposite. Skewness is defined by the following equation:

S = \frac{\mu_3}{\sigma^3}   (7)

where \mu_3 is the third central moment and \sigma is the standard deviation.
Fig. 3. Illustration of the analysis of P. The left column shows example fingerprints and the right column shows the histogram of P computed from the respective fingerprint image. Images (a) and (c) are better quality fingerprints than (e) and (g).
The skewness is computed over the histogram resulting from the previous step. In order to compute the final fingerprint quality, the negative of the skewness is taken; this value is then clipped to a predefined range and normalized from 0 to 100. The higher this value, the better the fingerprint quality. The Fingerprint Quality Score (FQS) is defined as follows:

FQS = \begin{cases} 0 & \text{if } -S < min \\ 100 \cdot \frac{-S - min}{max - min} & \text{if } min \le -S \le max \\ 100 & \text{if } -S > max \end{cases}   (8)

where min and max are the parameters used in the clip operation. In the experiments, -2 and 0 were used for min and max, respectively.
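A compact sketch of the quality measure of Eqs. (6)-(8), assuming the C and H images from the previous steps; computing the skewness directly over the pixel values of P stands in for the histogram-based computation, and the clipping limits default to the values reported above.

```python
import numpy as np
from scipy.stats import skew

def fingerprint_quality_score(combined, homogeneity, s_min=-2.0, s_max=0.0):
    """FQS in [0, 100]: higher values indicate better fingerprint quality."""
    p = np.abs(combined) * homogeneity   # Eq. (6): down-weight low-homogeneity pixels
    s = -skew(p.ravel())                 # negative skewness of the distribution of P
    if s < s_min:
        return 0.0
    if s > s_max:
        return 100.0
    return 100.0 * (s - s_min) / (s_max - s_min)   # Eq. (8): clip and rescale to 0..100
```

Under these assumptions, chaining the sketches above (filter bank, combined image, homogeneity, FQS) reproduces the pipeline of Figure 1 end to end.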
IV. EXPERIMENTS AND RESULTS
In order to validate the presented fingerprint quality assessment approach, two different analyses were performed. First, the correlation between the FQS and the resulting EER is analyzed. Then, the effect of removing the worst-quality images on the final EER is measured. The method is also compared with two other well-known Fingerprint Quality Assessment (FQA) methods: NIST Fingerprint Image Quality (NFIQ) [21] and the FQA of the commercial VeriFinger SDK, developed by Neurotechnology. Two different commercial SDKs were used for template extraction and matching: the BioPass SDK from VSoft Tecnologia and VeriFinger from Neurotechnology. These SDKs were chosen to allow a fair comparison between the FQAs.
A. Database and Protocol
In this work, four databases of the Fingerprint Verification Competition (FVC) [22] with different resolutions have been used for the experiments: FVC2000DB3, FVC2004DB2, FVC2004DB3, and FVC2006DB3. These databases were selected because they are commonly used in the literature. In addition, the quality problems present in these datasets are known to affect the performance of automatic fingerprint identification systems (AFIS) [23]. Each dataset is composed of 800 images (100 fingers with 8 samples per finger), except for FVC2006DB3, which has 140 fingers with 12 samples per finger, totaling 1680 images. Further details about the databases are given in Table I.
TABLE I
DETAILS OF DATABASES.

DB      Sensor   Image Dim.  Resolution
00DB3   Optical  448x478     500 dpi
04DB2   Optical  328x364     500 dpi
04DB3   Thermal  300x480     512 dpi
06DB3   Thermal  400x500     500 dpi
B. Correlation Coefficient Analysis
The matching of genuine samples is known to produce higher scores than that of fraudulent samples. However, a genuine match may result in a low score due to factors such as missing common information, acquisition failure, or low-quality images. In order to increase the precision of the method, one may decrease the amount of genuine matching errors (false rejections). This paper proposes to reject such samples using the proposed FQA algorithm. The correlation score is used to associate the scores of genuine matchings with the FQS: the Pearson correlation coefficient between the matching score and the mean quality of the two templates is computed. Usually, the matching score between genuine samples is higher than between fraudulent samples. However, the results suggest that when the matching score between genuine samples is low, there is a high correlation between the quality score and the matching score.

Fig. 4. EER removing worst 10% samples and using BioPass SDK.
The evaluation of this method first analyzes the correlation between the matching scores and the fingerprint qualities (see Table II). Since a genuine match is expected to produce a higher score than a fraudulent one, a lower quality score should be accompanied by a lower genuine matching score.
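As an illustration of this analysis, a small sketch of the correlation computation, assuming arrays of genuine matching scores and the corresponding quality scores of the two templates of each pair (hypothetical inputs, not the SDK outputs).

```python
import numpy as np

def quality_score_correlation(match_scores, quality_a, quality_b):
    """Pearson correlation between genuine matching scores and the mean template quality."""
    mean_quality = (np.asarray(quality_a, dtype=float) + np.asarray(quality_b, dtype=float)) / 2.0
    return np.corrcoef(np.asarray(match_scores, dtype=float), mean_quality)[0, 1]
```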
TABLE II
MEAN OF CORRELATION COEFFICIENTS BETWEEN THE FQS AND THE MATCHING SCORE.

AFIS        FQA         00DB3  04DB2  04DB3  06DB3
BioPass     VeriFinger  0.59   0.48   0.57   0.55
BioPass     NFIQ        0.52   0.49   0.45   0.23
BioPass     Our method  0.67   0.59   0.63   0.55
VeriFinger  VeriFinger  0.62   0.49   0.60   0.52
VeriFinger  NFIQ        0.58   0.54   0.46   0.25
VeriFinger  Our method  0.63   0.56   0.57   0.51
The mean of the values in Table II for VeriFinger, NFIQ, and our approach is 0.55, 0.44, and 0.58, respectively. The results show that this method achieved a moderate or strong correlation for all test sets regardless of the AFIS.
C. Evaluation Over Equal Error Rate
The first analysis considers the results when 10% of the worst-quality templates are removed and the BioPass SDK is used for matching. The EER of the proposed method achieved the highest improvement in comparison with the baseline results. The performance was improved from 48.55% (FVC2006DB3) up to 72.37% (FVC2000DB3). On average, the proposed algorithm decreased the EER by 60.57%, while NFIQ and VeriFinger decreased it by 41.61% and 52.91%, respectively. Figure 4 compares the results of each method for all four datasets.
When the 10% worst-quality templates are removed and the VeriFinger SDK is used for matching, the presented algorithm achieved better performance in 3 out of 4 databases. As can be seen in Figure 5, the EER was improved from 63.83% (FVC2006DB3) to 74.75% (FVC2004DB3). On average, the EER was improved by 69.72% with the proposed method and by 53.91% and 66.20% using NFIQ and VeriFinger, respectively. Finally, it can be seen that the presented FQA method is better than VeriFinger even when the VeriFinger SDK is used for template matching.

Fig. 5. EER removing worst 10% samples and using VeriFinger SDK.

Fig. 6. EER removing worst 20% samples and using BioPass SDK.
When 20% of the worst-quality templates are removed and the BioPass SDK is used for matching, the proposed method improved the EER by 74.40% on average over all datasets. Figure 6 shows that the best improvement was achieved on FVC2000DB3 (83.86%), while the lowest was on FVC2004DB2 (67.99%). The NFIQ and VeriFinger algorithms decreased the EER by 62.42% and 60.38%, respectively.
With the VeriFinger SDK used for matching (Figure 7), the proposed method improved the EER from 77.67% (FVC2006DB3) to 85.63% (FVC2000DB3), with an average improvement of 80.33%. In comparison, the NFIQ and VeriFinger methods decreased the EER by 64.65% and 81.14%, respectively.
Table III reports the percentage of templates that must be removed in order to improve the EER by a given factor. It can be noticed that the proposed method removes only 3.61% of the worst templates to achieve an EER improvement of 40%. To decrease the EER by 80%, the NFIQ and VeriFinger algorithms had to remove more than 35% of the low-quality templates, while the proposed method removed only 22.43%. These results show that the proposed method is able to identify the images that most affect the error rates of an AFIS for both SDKs.

Fig. 7. EER removing worst 20% samples and using VeriFinger SDK.

TABLE III
PERCENTAGE OF REMOVED TEMPLATES BY EACH SDK TO ACHIEVE A GIVEN EER IMPROVEMENT.

EER Improvement  NFIQ    VeriFinger  Our Approach
40%              5.71%   4.96%       3.61%
50%              8.84%   8.31%       5.68%
60%              8.62%   12.82%      8.26%
70%              27.98%  24.63%      10.41%
80%              38.19%  35.89%      22.43%
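To make the evaluation protocol concrete, the following is a simplified sketch of the "remove the worst fraction and recompute EER" procedure; it discards low-quality comparisons by their pair quality, which only approximates the paper's template-removal protocol, and the EER computation is a generic threshold sweep rather than either SDK's implementation.

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal Error Rate from genuine/impostor score arrays (higher score = better match)."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
    i = np.argmin(np.abs(far - frr))                              # closest crossing point
    return (far[i] + frr[i]) / 2.0

def eer_after_removal(genuine, impostor, quality_g, quality_i, fraction=0.10):
    """Recompute EER after discarding comparisons whose pair quality is in the worst fraction."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    quality_g, quality_i = np.asarray(quality_g, float), np.asarray(quality_i, float)
    cutoff = np.quantile(np.concatenate([quality_g, quality_i]), fraction)  # quality threshold
    return compute_eer(genuine[quality_g >= cutoff], impostor[quality_i >= cutoff])
```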
V. CONCLUSION
This work presented a new Fingerprint Quality Assessment method aimed at reducing the EER. The method was evaluated on four FVC datasets and compared with other FQA algorithms (VeriFinger and NFIQ). Our results showed that genuine comparisons involving low-quality templates usually produce low scores, yielding false rejections. When 10% of the worst-quality templates were removed by our method, the EER decreased more than with any of the other methods analyzed.
For future work, we first intend to analyze the performance with other filter banks, metrics, and SDKs (e.g., the NBIS SDK) presented in the literature. We may also extend our method by developing a local quality measure to be computed at template generation.
ACKNOWLEDGMENT
The authors would like to thank Vsoft for the support given to the research team during the process. This work was conducted during a scholarship supported by CAPES at the Federal University of Campina Grande.
REFERENCES
[1] H. T. F. Rhodes, Alphonse Bertillon, father of scientific detection.
Abelard-Schuman, 1956.
[2] A. A. Ross, K. Nandakumar, and A. K. Jain, Handbook of Multibiomet-
rics, 1st ed. Springer Publishing Company, Incorporated, 2011.
[3] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of
Fingerprint Recognition, 2nd ed. Springer Publishing Company,
Incorporated, 2009.
[4] A. K. Jain, P. Flynn, and A. A. Ross, Handbook of Biometrics. Berlin,
Heidelberg: Springer-Verlag, 2007.
[5] J. Fierrez-Aguilar, L. M. Munoz-Serrano, F. Alonso-Fernandez, and J. A. Ortega-García, “On the effects of image quality degradation on minutiae- and ridge-based automatic fingerprint recognition,” Proceedings 39th Annual 2005 International Carnahan Conference on Security Technology, pp. 79–82, 2005.
[6] C. R. Blomeke, S. J. Elliott, B. Senjaya, and G. T. Hales, “A comparison
of fingerprint image quality and matching performance between health-
care and general populations,” in Biometrics: Theory, Applications, and
Systems, 2009. BTAS’09. IEEE 3rd International Conference on. IEEE,
2009, pp. 1–4.
[7] D. Petrovska-Delacrétaz, G. Chollet, and B. Dorizzi, Fingerprint recognition. Springer, 2009, pp. 51–88.
[8] F. Turroni, R. Cappelli, and D. Maltoni, “Fingerprint enhancement
using contextual iterative filtering,” in Biometrics (ICB), 2012 5th IAPR
International Conference on. IEEE, 2012, pp. 152–157.
[9] P. Grother and E. Tabassi, “Performance of biometric quality measures,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 531–543, Apr. 2007. [Online]. Available: http://dx.doi.org/10.1109/TPAMI.2007.1019
[10] F. Alonso-Fernandez, J. Fierrez, J. Ortega-Garcia, J. Gonzalez-
Rodriguez, H. Fronthaler, K. Kollreider, and J. Bigun, “A comparative
study of fingerprint image-quality estimation methods,” IEEE Trans.
on Information Forensics and Security, vol. 2, no. 4, pp. 734–743,
December 2007.
[11] H. Fronthaler, K. Kollreider, J. Bigun, J. Fierrez, F. Alonso-Fernandez,
J. Ortega-Garcia, and J. Gonzalez-Rodriguez, “Fingerprint image quality
estimation and its application to multi-algorithm verification,” IEEE
Trans. on Information Forensics and Security, vol. 3, no. 2, pp. 331–338,
June 2008.
[12] A. Sankaran, M. Vatsa, and R. Singh, “Automated clarity and quality as-
sessment for latent fingerprints,” in IEEE Sixth International Conference
on Biometrics: Theory, Applications and Systems, BTAS 2013, Arlington,
VA, USA, September 29 - October 2, 2013, 2013, pp. 1–6.
[13] J. D. S. Kiltz and C. Vielhauer, “Automated clarity and quality assess-
ment for latent fingerprints,” in Proceedings of the 2nd International
Workshop on Biometrics and Forensics (IWBF14). IEEE, 2014.
[14] J. Hammerle-Uhl, M. Pober, and A. Uhl, “Systematic evaluation method-
ology for fingerprint-image quality assessment techniques,” in Informa-
tion and Communication Technology, Electronics and Microelectronics
(MIPRO), 2014 37th International Convention on. IEEE, 2014, pp.
1315–1319.
[15] Z. Yao, J.-M. Le Bars, C. Charrier, and C. Rosenberger, “Literature
review of fingerprint quality assessment and its evaluation,” IET Bio-
metrics, vol. 5, no. 3, pp. 243–251, 2016.
[16] L. Shen, A. Kot, and W. Koo, “Quality measures of fingerprint images,” in International Conference on Audio- and Video-Based Biometric Person Authentication. Springer, 2001, pp. 266–271.
[17] Z. Yao, C. Charrier, C. Rosenberger et al., “Quality assessment of
fingerprints with minutiae delaunay triangulation,” in Information Sys-
tems Security and Privacy (ICISSP), 2015 International Conference on.
IEEE, 2015, pp. 315–321.
[18] Y. Chen, S. C. Dass, and A. K. Jain, “Fingerprint quality indices for
predicting authentication performance,” in International Conference on
Audio-and Video-Based Biometric Person Authentication. Springer,
2005, pp. 160–170.
[19] B. Lee, J. Moon, and H. Kim, “A novel measure of fingerprint image
quality using the fourier spectrum,” in Biometric Technology for Human
Identification II, vol. 5779. International Society for Optics and
Photonics, 2005, pp. 105–113.
[20] E. Lim, X. Jiang, and W. Yau, “Fingerprint quality and validity analysis,” in Image Processing. 2002. Proceedings. 2002 International Conference on, vol. 1. IEEE, 2002, pp. I–I.
[21] E. Tabassi, C. Wilson, and C. Watson, Fingerprint Image Quality.
National Institute of Standards and Technology, 2004. [Online].
Available: https://books.google.com.br/books?id=pNd0nQAACAAJ
[22] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain, “FVC2004: Third fingerprint verification competition,” in Biometric Authentication, D. Zhang and A. K. Jain, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 1–7.
[23] K. Phromsuthirak and V. Areekul, “Fingerprint quality assessment using
frequency and orientation subbands of block-based fourier transform,”
in Biometrics (ICB), 2013 International Conference on. IEEE, 2013,
pp. 1–7.