A Review of Literature on Iris Recognition
Samitha Nanayakkara
Department of Information Technology
Faculty of Humanities and Social Sciences
University of Sri Jayewardenepura
Gangodawila, Nugegoda 11250, Sri Lanka
Prof. Ravinda Meegama
Department of Computer Science
Faculty of Applied Sciences
University of Sri Jayewardenepura
Gangodawila, Nugegoda 11250, Sri Lanka
Abstract—This paper presents a survey of the literature related to one of the biometric recognition systems: the iris recognition system. Biometric authentication has become one of the most important security technologies due to the prominent properties of biometrics compared to other authentication methods. Since most human phenotypes are unique, physiological traits such as fingerprints, iris color, face patterns and geometries are considered as security passwords. Among these, the iris receives the most attention in authentication because of its reliability. Even the iris textures used in iris recognition differ between the left and right eyes of the same person, making iris recognition more secure than the popular face recognition. The aim of this paper is to explore recent developments in iris recognition systems and the algorithms behind them.
Index Terms—iris recognition, biometrics authentication, im-
age processing, machine learning
I. INTRODUCTION
Iris recognition is one of the emerging areas in security and has received considerable attention in recent years. One of the recent challenges of biometric authentication has been how to apply biometric security solutions when the biological characteristics of humans change rapidly. This was a considerable problem, and numerous mathematical and machine learning models had to be adopted to predict such characteristics in advance to perform better authentication. However, one unusual feature of the iris is that it remains stable over a person's life span [1]. This has made iris recognition increasingly popular in the security industry.
The iris is a thin, circular structure in the eye, responsible for controlling the size of the pupil. It mainly consists of a few components, as Fig. 1 shows.
Fig. 1. The anatomy of eye
The cornea is the transparent front part of the eye that covers the iris, pupil, and anterior chamber, the aqueous-humor-filled space inside the eye; the iris itself is a pigmented muscular curtain that carries the unique patterns. Since the cornea is not a barrier to iris scanning, automated machines can clearly read iris patterns and use a step-by-step mechanism to extract features for authentication purposes.
Fig. 2. Iris Recognition Process
As shown in Fig. 2, the iris image acquisition stage captures a richly detailed image of the unique structure of the iris using sophisticated equipment while illuminating the eye with near-infrared light, because that wavelength reveals rich information even in darker irises [2]. After obtaining the image, pre-processing is carried out to perform edge detection and contrast adjustment. Mathematical and statistical models are then used to identify the iris region in the image; this process is called iris segmentation. In the next stage, normalization transforms the iris image from Cartesian coordinates to polar coordinates in order to allow comparisons [2]. Remaining problems such as low contrast can be corrected using image enhancement algorithms. In the feature extraction process, bit patterns containing the information needed to decide whether two feature templates match are extracted, often using texture analysis methods. The extracted information is stored as feature templates in a database for matching purposes. In the comparison stage, if the distance calculation falls below a threshold value, a decision is made with a high confidence level. Databases of templates are used by matching engines that compare millions of feature templates per second with nearly zero false match rates [2].
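The stages above can be sketched as a simple pipeline. The function names and placeholder implementations below are illustrative assumptions only, not part of any cited system:

```python
# Illustrative sketch of the iris recognition pipeline described above.
# Each stage is a placeholder; real systems use the algorithms surveyed below.

def acquire(raw):               # image acquisition (e.g. NIR capture)
    return raw

def segment(img):               # locate iris/pupil boundaries
    return {"iris": img, "pupil_center": (0, 0)}

def normalize(seg):             # Cartesian -> polar "unwrapping"
    return seg["iris"]

def extract_features(norm):     # encode texture into a bit template
    return [px % 2 for px in norm]          # toy 1-bit-per-pixel code

def match(tmpl_a, tmpl_b, threshold=0.3):   # fractional disagreement
    dist = sum(a != b for a, b in zip(tmpl_a, tmpl_b)) / len(tmpl_a)
    return dist < threshold

probe = extract_features(normalize(segment(acquire([5, 2, 9, 4]))))
enrolled = extract_features(normalize(segment(acquire([5, 2, 9, 4]))))
print(match(probe, enrolled))   # identical inputs -> True
```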
In this paper, each stage of the iris recognition will be
discussed separately to thoroughly identify issues of proposed
methods and compare them to find how researchers have
adopted different technologies to address ongoing problems.
II. RELATED WORK
In related work, most of the current iris recognition systems
and their algorithms will be covered.
A. Image Acquisition
The iris image is captured with highly sensitive equipment, which helps guarantee the quality of the captured image. This phase is merely an image capture. However, it is necessary to ensure that the method can overcome obstacles frequently found in the acquisition process, such as blurry images, camera diffusion, noise, light reflections and other factors that may affect the segmentation process [3].
B. Image Segmentation
The iris recognition algorithm starts to work from this phase onward. Segmentation approximates the iris with two circles: one approximating the iris boundary and the other the pupil boundary.
Fig. 3. Iris Localization
The success of segmentation depends on the quality of the acquired image. A sophisticated iris-scanning camera that uses NIR (Near Infrared) wavelengths to capture the iris image produces fewer disturbances such as specular reflections. If the image has been captured under natural light, however, specular reflections appear and fewer features are available. Also, some people have darkly pigmented irises, making it much harder to separate the iris region from the pupil region because of the low contrast [4]. Before the iris is captured with approximated bounds, it must be localized. There are plenty of methods used for this particular function, and the most popular ones are listed and discussed below.
1) Hough Transform: One of the famous computer vision algorithms used to identify different kinds of shapes in an image, such as squares and circles. Since iris segmentation deals with identifying two circular objects in a 2D image, the circular Hough transform in particular can be adopted [5]. First, an edge detector is applied to the grey-scale image, and the first derivatives of the image intensities are thresholded to produce the edge map. A "voting procedure" is then carried out to cast votes into the Hough space. These votes correspond to the parameters of circles passing through each edge point [5].
x_c^2 + y_c^2 − r^2 = 0    (1)
Parameters x_c and y_c in Equation 1 denote the center coordinates and r the radius. The center coordinates and radius receiving the maximum votes define the best-fitting contour.
Fig. 4. Original image of eye and different edge maps
Since this detection still suffers from the disturbance of the eyelids, Wildes et al. [6] use two edge maps: a horizontal edge map for detecting the eyelids and a vertical edge map for detecting the iris boundary. The reason for biasing the derivatives horizontally when detecting eyelids is that eyelids normally lie in the horizontal direction. If the horizontal edge map were used to identify the iris boundary, the eyelids would interfere; the vertical edge map is therefore used for iris boundary identification to reduce the influence of the eyelids.
Even though the Hough transform is a popular method, it still has disadvantages. It is not suitable for real-time applications, since its computational requirements are high. The Hough transform also needs a threshold value to filter results; the calculation and selection of this threshold may cut off required points in the edge map and lead to false circle identification.
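As a minimal sketch of the voting procedure (pure Python, on a synthetic edge map; the grid size and candidate ranges are arbitrary assumptions):

```python
import math

# Synthetic edge map: points on a circle of radius 10 centered at (20, 20).
edges = [(20 + round(10 * math.cos(t * math.pi / 18)),
          20 + round(10 * math.sin(t * math.pi / 18))) for t in range(36)]

# Accumulate votes in Hough space over candidate centers and radii:
# each edge point votes for every (xc, yc, r) circle passing through it.
votes = {}
for (ex, ey) in edges:
    for xc in range(15, 26):
        for yc in range(15, 26):
            r = round(math.hypot(ex - xc, ey - yc))
            if 5 <= r <= 15:
                key = (xc, yc, r)
                votes[key] = votes.get(key, 0) + 1

best = max(votes, key=votes.get)    # parameters with the most votes
print(best)                          # recovers the synthetic circle
```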
2) Daugman's Algorithm: Daugman's algorithm uses an integro-differential operator to successfully approximate the inner and outer boundaries of the iris as well as the upper and lower eyelids [2].
max_(r, x_0, y_0) | G_σ(r) ∗ (∂/∂r) ∮_(r, x_0, y_0) I(x, y) / (2πr) ds |    (2)
Here I(x, y) is the intensity of the eye image at coordinates x and y, r is the radius, G_σ is a Gaussian smoothing function, and s denotes the contour of the circle given by radius r [2]. The operator starts from a point and calculates the partial derivative, with respect to radius, of the average intensity along circles of varying radius, searching for the parameters that maximize this blurred derivative over the image. After applying the Gaussian function, the positions and radii of the inner and outer circles can be identified from the extremum of this difference. Since eyelid occlusion remains a problem, a modification changes the circular contour to a parabolic one, which identifies the iris part more accurately.
Daugman's algorithm is less computationally intensive than the Hough transform because it uses only the first derivative and requires no threshold value. For this reason it is still used in commercial-grade security systems.
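The core idea of the operator, finding the radius where the circular mean intensity changes most sharply, can be sketched on a synthetic image (pure Python; the image size, center and intensities are illustrative assumptions, and the search is over radius only, with the center fixed):

```python
import math

# Toy integro-differential search: a dark disk (radius 8, the "pupil")
# on a bright background; find the radius whose circular mean-intensity
# derivative is largest.
SIZE, CX, CY, TRUE_R = 41, 20, 20, 8
img = [[30 if math.hypot(x - CX, y - CY) <= TRUE_R + 0.5 else 200
        for x in range(SIZE)] for y in range(SIZE)]

def circular_mean(img, xc, yc, r, n=64):
    """Average intensity along the circle of radius r centered at (xc, yc)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x = min(max(round(xc + r * math.cos(t)), 0), SIZE - 1)
        y = min(max(round(yc + r * math.sin(t)), 0), SIZE - 1)
        total += img[y][x]
    return total / n

# Finite-difference derivative of the circular integral w.r.t. radius.
radii = range(3, 16)
jumps = {r: circular_mean(img, CX, CY, r + 1) - circular_mean(img, CX, CY, r)
         for r in radii}
best_r = max(jumps, key=jumps.get)
print(best_r)  # the dark-to-bright transition, i.e. the pupil boundary
```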
3) Canny and Sobel edge detection: In [7], both Canny and Sobel edge detection are considered to select which performs better. First, in Sobel edge detection, an operator is applied at each point of the given iris image to generate the corresponding gradient vector. This operator uses two 3x3 kernels, which are convolved with the original image to calculate approximations of the derivatives [7]. If A is the source image (the iris image), W_x and W_y are defined as the images containing the horizontal and vertical derivative approximations.
W_x =
[ −1  0  +1 ]
[ −2  0  +2 ]  ∗ A    (3)
[ −1  0  +1 ]

W_y =
[ +1  +2  +1 ]
[  0   0   0 ]  ∗ A    (4)
[ −1  −2  −1 ]

∗ denotes the 2-dimensional convolution operation.
Unlike Sobel, Canny edge detection has several steps: smoothing, finding the gradient, non-maximum suppression, thresholding and edge determination, respectively. In the first step, noise is removed by blurring the iris image. The operator then marks edges where the gradient of the image has large magnitude. After that, the operator keeps only local maxima as edges. Before the edges are finalized, thresholding is used to identify the strongest edges, and the determination stage removes weak edges that are not connected to any strong ones. As the researcher explains in [7], the Canny operator is near-optimal even for noisy images and performs better than Sobel. Sobel, however, has the advantage of detecting horizontal and vertical edges separately, which makes it relatively more cost-effective than Canny.
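A minimal sketch of the Sobel step, applying the kernels of Equations 3 and 4 to a tiny grayscale patch (pure Python; the kernels are applied in correlation form, which for these kernels only flips the sign relative to true convolution):

```python
# Minimal Sobel sketch: apply the W_x and W_y kernels from Equations 3 and 4.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
KY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # vertical derivative

def sobel(img):
    h, w = len(img), len(img[0])
    gx = [[0] * w for _ in range(h)]
    gy = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for dy in range(3):
                for dx in range(3):
                    px = img[y + dy - 1][x + dx - 1]
                    gx[y][x] += KX[dy][dx] * px
                    gy[y][x] += KY[dy][dx] * px
    return gx, gy

# A vertical step edge: left half dark (0), right half bright (100).
img = [[0, 0, 100, 100] for _ in range(4)]
gx, gy = sobel(img)
print(gx[1][1], gy[1][1])   # strong horizontal gradient, no vertical one
```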
4) Eyelids and eyelashes detection: Eyelids and eyelashes are noise factors in the eye image and may cover the iris region. There are several methods to detect eyelids and eyelashes, including the methods discussed earlier. In [8], however, researchers propose a new method that applies histogram equalization (HE) and adaptive histogram equalization (AHE) before carrying out image segmentation. Applying HE to the acquired image increases the overall contrast of the image by redistributing the intensity values. This introduces a problem: the contrast enhancement affects the overall brightness, which may ultimately result in low or excessive saturation in some parts of the image. To overcome that issue, AHE is used; it increases contrast locally, based on equalizing histograms computed over smaller areas [9]. This helps the Canny operator efficiently identify the iris while neglecting noise. In [10] the Canny operator has also been used for edge detection; as that researcher highlights, the Canny technique is nearly optimal and robust compared to other mechanisms.
First, the image is blurred to smooth it and remove noise coming from the camera, using a Gaussian filter. Then, to find edges where the intensity change is high, the Sobel operator is used, since it is very effective at finding the gradient of an image. After the gradient is obtained, blurred edges are converted to sharp ones, and potential (strong) edges are determined using a threshold value [10]. Using BLOB (Binary Large Object) analysis, strong edges are selected as final edges. Finally, a linear Hough transform is used to identify noise factors such as eyelids and lashes, and a circular Hough transform detects the iris region, as in Fig. 5.
Fig. 5. Result after removal of eyelids and lashes from original image [10]
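The HE step described above can be sketched as follows (pure Python, on a toy image; in practice libraries provide this, and AHE applies the same remapping per local tile):

```python
# Histogram equalization sketch: remap intensities via the normalized
# cumulative histogram so they spread over the full range.
def equalize(img, levels=256):
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0                       # cumulative distribution
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero cdf value
    n = len(flat)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min
           else 0 for c in cdf]
    return [[lut[p] for p in row] for row in img]

# Low-contrast patch: values cluster in 100..103.
img = [[100, 100, 101, 101], [101, 102, 102, 103]]
out = equalize(img)
print(min(p for r in out for p in r), max(p for r in out for p in r))  # 0 255
```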
5) Fuzzified Image Enhancement: Rather than proposing a new segmentation method, this researcher proposes a new image enhancement mechanism that helps segment the iris accurately even with occlusions. Since fuzzy image enhancement works without deep learning and performs better than traditional image processing methods, [11] seeks a new approach combining fuzzy image enhancement with deep learning and uses it to correctly identify the iris in the segmentation phase.
The method starts by enhancing the features of the iris image, which helps to efficiently identify the iris and pupil regions. Applying fuzzy image filters at the end blurs the non-relevant parts of the image. A triangular fuzzy average filter, a triangular fuzzy median filter and a Gaussian filter are used to smooth noise and enhance edges after the well-known Hough transform detects the iris [11].
In the Convolutional Neural Network phase, they use a typical CNN architecture trained to extract the features, and in parallel train a Capsule Network (CN) for the same purpose. Fig. 6 and Fig. 7 show their architectures, respectively. This researcher has trained four networks altogether: CNN, F-CNN, Capsule Network and F-Capsule, where F-CNN and F-Capsule refer to the CNN and Capsule Network trained on fuzzified images.
6) Deep learning based iris segmentation: Another interesting study has been conducted by M. Trokielewicz et al. [12] on using deep-learning-based iris segmentation post-mortem. This topic was first proposed back in 2015, and since then researchers have been working hard on the evaluation of deceased subjects' irises. M. Trokielewicz et al. claim their proposed method is promising even compared with the algorithms used in commercial products, and they are trying to develop a drop-in replacement for the Daugman method recognition pipeline [12].
Fig. 6. Proposed CNN in [11]
Fig. 7. Proposed Capsule Network in [11]
The process initiates with a data-driven segmentation model based on a deep convolutional neural network, used to localize the iris in the image; this model has been trained with standard machine learning practice, such as cross-validation on the training images, to minimize over-fitting. As the researchers mention, to make this method a drop-in replacement, normalization must be carried out efficiently and accurately: the segmentation has to be correctly normalized onto a dimensionless polar-coordinate rectangle [12]. To tackle this issue, they follow the procedure introduced by H. Hofbauer et al. in [13], who introduced a method to parameterize the segmentation carried out by a CNN model, bridging the gap between CNN-based segmentation and the rubber-sheet transform. After this masking process, the rubber-sheet transform can be used directly for normalization.
7) Results: In the segmentation stage, automated iris segmentation proved to be successful [4]. L. Masek et al. [4] took two iris databases, CASIA and LEI, and the iris segmentation algorithms managed to segment the iris and pupil regions with success rates of 82% and 62%, respectively. Still, there were problematic images in the databases with so little intensity difference between the pupil boundary and the iris boundary that the boundaries were wrongly identified. In Fig. 8, this intensity difference leads the Canny edge detection method to falsely identify edges.
Fig. 8. There is a white point in iris boundary [4]
The researcher explains that a proper parameter setup can remove these problems.
In [11], the researcher has shown that the fuzzified images with the deep-learning capsule network achieved the best results compared to other methods, according to Fig. 9. In practical applications it is difficult to capture the iris when glasses are present, under various lighting conditions, and so on. Applying the fuzzified image enhancement can help the training process and the training outcome (accuracy), and ultimately adapt better to such conditions.
Fig. 9. Results of [11]
Unlike other research proposals, [12] has taken full advantage of the prominent features of convolutional neural networks to perform the segmentation process accurately and efficiently. According to Fig. 10, the proposed method has shown a lower EER than other algorithms, even those used in commercial products such as IriCore.
Fig. 10. Comparison results of [12]
C. Image Normalization
When capturing the image with a camera, various factors come into play. The distance from the camera to the eye will not be identical every time, which may affect the size of the pupil and of the iris image. That causes texture deformation of the iris and can lead to poor performance in the feature extraction and feature matching stages. Rotation of the eyes or of the camera may lead to such problems as well. Due to these various factors, the iris has to be normalized to compensate.
1) Daugman's Rubber Sheet Model: A model proposed by John Daugman remaps pixels in the iris region from Cartesian coordinates to polar coordinates [2]. It uses polar coordinates (r, θ), where θ is the angle from 0 to 2π and r is the radius, which ranges from 0 to 1.
Fig. 11. Rubber sheet model mapping
I(x(r, θ), y(r, θ)) → I(r, θ)
where
x(r, θ) = (1 − r) x_p(θ) + r x_l(θ)
y(r, θ) = (1 − r) y_p(θ) + r y_l(θ)    (5)
This remapping of the iris region can be modelled using Equation 5, where (x, y) are the Cartesian coordinates of the iris image, and (x_p, y_p) and (x_l, y_l) are the Cartesian coordinates of the pupil boundary and the iris boundary along the θ direction. In this method, the iris region is mapped to a single plane sheet to allow comparisons. Note that this rubber sheet model does not compensate for rotational variance, but it does compensate for pupil dilation, imaging distance inconsistencies and non-concentric pupil displacement. In the normalized iris image, as in Fig. 11, the direction of r is called the radial direction and that of θ the angular direction.
Almost all studies still use the rubber-sheet model for iris normalization, but other research has been carried out to find new mechanisms.
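A minimal sketch of the remapping in Equation 5 (pure Python; the concentric boundary circles and sampling resolution are illustrative assumptions, and real irises have non-concentric boundaries):

```python
import math

# Rubber-sheet sketch: sample I(x(r,t), y(r,t)) between a synthetic
# pupil boundary (radius 5) and iris boundary (radius 12) at (20, 20).
SIZE = 41
img = [[(x + y) % 256 for x in range(SIZE)] for y in range(SIZE)]

def unwrap(img, cx, cy, r_pupil, r_iris, n_r=8, n_t=32):
    """Map the annulus between the two boundaries to an n_r x n_t sheet."""
    sheet = []
    for i in range(n_r):
        r = i / (n_r - 1)                      # r runs from 0 to 1
        row = []
        for j in range(n_t):
            t = 2 * math.pi * j / n_t          # theta runs from 0 to 2*pi
            # Equation 5: linear blend of pupil and iris boundary points
            x = (1 - r) * (cx + r_pupil * math.cos(t)) \
                + r * (cx + r_iris * math.cos(t))
            y = (1 - r) * (cy + r_pupil * math.sin(t)) \
                + r * (cy + r_iris * math.sin(t))
            row.append(img[round(y)][round(x)])  # nearest-neighbour sample
        sheet.append(row)
    return sheet

sheet = unwrap(img, 20, 20, 5, 12)
print(len(sheet), len(sheet[0]))   # -> 8 32
```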
2) Image Registration technique: In [14], the researcher introduces a new method to transform the iris texture from Cartesian to polar coordinates, also known as iris unwrapping. This method has four steps: feature detection, feature matching, transform model estimation, and image re-sampling and transformation. Initially, a mapping function has to be selected. The method begins by geometrically warping the segmented image I_a(x, y) into alignment with an image I_d(x, y) picked from the database. The mapping function (u(x, y), v(x, y)), used to convert the original image to polar coordinates, is selected by minimizing Equation 6.
∫∫ (I_d(x, y) − I_a(x − u, y − v))^2 dx dy    (6)
Secondly, a phase correlation method based on the Fourier shift property is introduced: a shift in the coordinate frames of two functions is transformed into linear phase differences. Using these two methods, and following the image registration steps above, the researcher shows that this method performs even better than Daugman's rubber sheet model. The phase correlation efficiency results in Fig. 12 support the researchers' claim.
Fig. 12. Verification results of Image registration and Rubber Sheet method [14]
3) Image Enhancement: Low contrast of the image, as well as non-uniform illumination, may cause poor performance in the feature extraction stage. To avoid these factors, the normalized image must be enhanced using different techniques to compensate.
Fig. 13. (a) Before Enhancement (b) after enhancement
One such technique is local histogram analysis, which produces a uniformly illuminated, richly detailed normalized image [15], [16]. Before enhancement, the details in Fig. 13 (a) can barely be identified; after enhancement, rich details of the image are visible, enabling better feature extraction. There can still be issues due to reflections, which can be removed by a simple thresholding operation, as [17] explains.
4) Results: Even recent studies use Daugman's rubber sheet model for normalization, and it is a widely accepted method in industry as well. Nevertheless, the new image registration technique [14] has been shown to be more efficient than Daugman's method. However, while the normalization process addresses pupil dilation, it may not be accurate all the time, because pupil dilation can prevent surface patterns from being reconstructed perfectly. Research can therefore still be conducted in this area to make the process more robust.
D. Feature Extraction
Feature extraction is the next stage, in which unique features of the normalized image are extracted and stored as a biometric template. Only significant features should be encoded, to allow a proper comparison between two templates with high confidence. When comparing two templates, there are two main comparison classes: comparing two templates made from different irises is an inter-class comparison and should yield one range of values, while comparing two templates from the same iris, called an intra-class comparison, should yield a different range of values [4].
1) Laplacian of Gaussian Filters: The Laplacian is a two-dimensional isotropic estimate of an image's second spatial derivative [18], and it can also be employed to extract information from the normalized iris image by decomposing the iris region [19].
G = −(1/(πσ^4)) (1 − ρ^2/(2σ^2)) e^(−ρ^2/(2σ^2))    (7)
Here ρ is the radial distance and σ denotes the standard deviation of the Gaussian [4]. The representation generated using Equation 7 is called a Laplacian pyramid; it is constructed using four levels and is used to generate the iris template.
2) Gabor filters: In [2], Daugman uses a 2D Gabor filter to extract features from the normalized image. A Gabor filter is constructed from sine and cosine waves modulated by a Gaussian, and it optimally combines localization in space and frequency: a sine wave is optimally localized in frequency but poorly localized in space, so modulating it with a Gaussian provides localization in space. The filter has real and imaginary components representing orthogonal components, also known as the even-symmetric and odd-symmetric parts, respectively. Daugman uses the 2D Gabor filter in the image domain (x, y), as in Equation 8, to extract features.
G(x, y) = e^(−π[(x − x_0)^2/α^2 + (y − y_0)^2/β^2]) e^(−2πi[u_0(x − x_0) + v_0(y − y_0)])    (8)

Here (u_0, v_0) denotes the modulation, and the spatial frequency defined in the Gabor filter is given by

ω_0 = sqrt(u_0^2 + v_0^2)    (9)

(x_0, y_0) denotes the location in the image, and α and β denote the effective width and length [4].
Fig. 14. Odd Symmetric 2D Gabor filter on the left and even Symmetric
Gabor filter on the right [4]
Each pattern is decomposed using these two Gabor filters to extract its information. As shown in Fig. 15, the 2D iris pattern is divided into several 1D signals, and each 1D signal is convolved with a 1D Gabor filter to get the response.
Fig. 15. Process of Feature Extraction using Gabor Filter [4]
Using the odd- and even-symmetric Gabor filters, the real and imaginary responses are obtained, and this phase information is quantized into one of four quadrants of the complex plane. Four levels can be represented in 2 bits, so each pixel of the normalized iris image is represented by 2 bits in the iris template, generating 2,048 bits. Masking bits are also generated for corrupted areas of the iris pattern; noise-area intensities are calculated by averaging the intensity levels of nearby bits to minimize their influence. In total, this creates a 256-byte template.
Phase information is extracted rather than amplitude information because it provides significant information about the patterns without depending on external factors such as illumination and contrast.
3) Hilbert transform: In [20], the researcher introduces a new method based on instantaneous phase and/or emergent frequency, which uses the Hilbert transform to extract information from the iris image. Using complex mathematical models, an analytic image can be formed, constructed jointly from the original image and its Hilbert transform. For the instantaneous phase, the real and imaginary parts of the analytic image are identified, while the emergent frequency is formed from three different dominant frequencies [20]. Finally, a thresholding mechanism similar to Daugman's is used to calculate the feature vector. As the researcher claims, this approach is computationally effective.
4) Wavelet transform: Plenty of wavelet transforms are used for feature extraction, including the Haar wavelet [21], fast wavelet [22], hat wavelet and biorthogonal wavelet. The fast wavelet transform is designed to convert a signal or waveform in the time domain into a sequence of coefficients based on an orthogonal basis of small finite waves, or wavelets [22]. This method starts by reading the greyscale image, which is converted to a predefined width and height. A linear array is then used to store the pixel values, and fast wavelet transform encoding is performed. As the researcher claims, this method is better than other methods because of its lower complexity and high computation speed [22].
Among other feature extraction methods, wavelet transforms are preferred over Fourier transforms because they provide both space and frequency resolution.
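A single level of the Haar wavelet transform, the simplest case of the fast wavelet transform mentioned above, can be sketched as (pure Python, on a toy 1D signal):

```python
import math

# One level of the 1D Haar fast wavelet transform: pairwise (scaled)
# sums give the approximation coefficients, pairwise differences the detail.
def haar_step(signal):
    s = 1 / math.sqrt(2)                     # orthonormal scaling
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_step(sig)
print([round(x, 6) for x in haar_inverse(approx, detail)])  # recovers sig
```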
5) Local Binary Pattern: LBP, or Local Binary Pattern, is a type of visual descriptor used for classification. In [23], this method is used to generate an LBP image and extract features by a chunked encoding method. The LBP operator is a simple yet very efficient texture operator that goes through every pixel and labels it by thresholding the eight neighbouring pixels relative to the center pixel. The center pixel value is then calculated by multiplying the thresholded neighbourhood pixel values with weights of 2^n [24].
Fig. 16. The basic LBP operator [24]
Fig. 16 demonstrates how the LBP operator calculates the center pixel value by visiting every pixel. After this operation, Fig. 17 shows the different feature images produced by different LBP operators.
Fig. 17. Normalized iris image and its different LBP feature images [24]
After this LBP operation, as Fig. 18 shows, the chunk feature encoder is invoked and the feature code is generated by traversing all defined blocks from top to bottom and left to right.
Fig. 18. Generation of the iris code [24]
The researcher mentions that experimental results have shown that this algorithm achieves a higher recognition rate than the traditional iris feature extraction method. According to Fig. 19, even though Daugman's method has a 0.01% higher correct recognition rate than the method proposed in [24], the EER of the proposed method is still significantly lower.
Fig. 19. Comparison of recognition accuracy of various recognition schemes
[24]
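The basic LBP operator in Fig. 16 can be sketched as (pure Python, on a single 3x3 neighbourhood; the sample values are illustrative):

```python
# Basic LBP operator: threshold the 8 neighbours against the center,
# then weight the resulting bits by powers of two.
def lbp_code(patch):
    c = patch[1][1]
    # neighbours in clockwise order starting from the top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for n, (y, x) in enumerate(coords):
        if patch[y][x] >= c:
            code += 2 ** n        # weight each thresholded bit by 2^n
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))   # -> 241
```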
6) Convolutional Neural Network: A study has provided an experimental approach using deep convolutional features for iris recognition, specifically for feature extraction. The approach uses VGG-Net for the iris recognition task [25], treating the model as a feature extraction engine. After extraction, image classification is used to find the corresponding label for the iris image; a multi-class SVM (Multi-Class Support Vector Machine) was proposed for this classification [25].
According to [25], experimental tests were run on the IIT Delhi iris image database and the CASIA-Iris-1000 database. After resizing the training images for the network and training, the recognition accuracy of the fully connected layer (fc6) was evaluated for different numbers of PCA (Principal Component Analysis) components on both iris databases. Fig. 20 (a) and (b) show the accuracy levels: 98% accuracy was obtained for the IIT Delhi iris database images with 100 PCA components, and 90% accuracy for the CASIA-Iris-1000 database images with 200 PCA components [25].
The evaluation of VGG-Net's performance for this task also shows a high accuracy rate in Fig. 21 for any level after the 7th layer, with a minimum accuracy of 98%. There is, however, a drop in accuracy after layer 11; one reason could be that the later neural network layers capture abstract features that do not discriminate iris patterns from one another, as suggested in [25].
Fig. 20. (a) For IIT Delhi Recognition accuracy against Number of PCAs
(b) For CASIA-Iris-1000 Recognition accuracy for Number of PCAs [25]
Fig. 21. Accuracy against VGG-Net Layers after 5th Layer [25]
E. Template Matching
After features have been extracted and stored as feature templates in the database, a metric is needed to match two templates, identifying similarities and differences to make judgements. A threshold value has to be defined to separate inter-class comparisons from intra-class comparisons.
1) Hamming Distance: One of the best solutions for matching is the Hamming distance [4], which is employed for the bit-wise comparison necessary to identify similarities and differences.
HD = ‖ (codeA ⊗ codeB) ∩ maskA ∩ maskB ‖ / ‖ maskA ∩ maskB ‖    (10)
In the Hamming distance, the ⊗ (XOR) operator is used to identify differences between corresponding bits, while the ∩ (AND) operator with the masks ensures that the compared bits are not influenced by external factors such as eyelids, eyelashes, illumination inconsistencies or other kinds of noise. The normalized values of the numerator and denominator are then used to calculate the corresponding HD (Hamming distance) [4]. In the equation, codeA and codeB are the bit patterns of the iris templates A and B being compared, maskA holds the corresponding mask values for template A's bits, and the same applies to maskB. As [26] explains, the denominator counts the bit positions left unaffected by external factors. This is therefore a measure of dissimilarity: if the Hamming distance is zero, the two templates match perfectly, while an HD of 0.5 means the two templates are independent. A threshold is calculated afterwards to identify whether two iris templates come from the same person or not.
Theoretically, the Hamming distance of an intra-class comparison should be 0, but various factors, such as imperfect normalization and undetected noise, prevent identical templates. As mentioned in the normalization section, normalization cannot address rotation occurring when the iris image is captured. To achieve the best comparison results, bit-wise shifting was proposed by Daugman: the templates are shifted horizontally to correct for rotation in the original iris capture. If an angular resolution of 180 is used, each bit of shifting corresponds to 2 degrees of the iris region.
Fig. 22. Bit-wise shifting and template comparison [4]
In the example of Fig. 22, the first comparison gives a Hamming distance of 0.83, which suggests the templates are independent. Since rotational inconsistencies may have occurred, bit-wise shifting is applied: first the template is shifted two bits to the left and the Hamming distance is calculated, then two bits to the right of the original position and the distance is calculated again. The lowest Hamming distance is taken, since it represents the best match that can be found. Here the lowest value is HD = 0, so the comparison is taken as successful.
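The shift-and-compare procedure above can be sketched as follows. For brevity the masks are omitted and a plain fractional Hamming distance is used; the function names and the default shift range are our own choices, not from [4]:

```python
def rotate(bits, k):
    """Circularly shift a bit list by k positions; this models rotating
    the unwrapped (normalized) iris strip horizontally."""
    k %= len(bits)
    return bits[k:] + bits[:k]

def hamming(a, b):
    """Plain fractional Hamming distance (masks omitted for brevity)."""
    return sum(x ^ y for x, y in zip(a, b)) / len(a)

def best_shift_distance(code_a, code_b, max_shift=8, step=2):
    """Compare the templates under several circular shifts of B and keep
    the lowest distance, compensating for head tilt at capture time."""
    return min(hamming(code_a, rotate(code_b, s))
               for s in range(-max_shift, max_shift + 1, step))
```

A template compared against a rotated copy of itself scores poorly without shifting, but the shifted search recovers the perfect match of 0.0.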
2) Elastic Graph Matching: Elastic Bunch Graph Matching (EBGM), or Elastic Graph Matching (EGM), is an algorithm for recognizing objects or object classes in an image based on graph representations of other images [27]. A template graph MT is compared against an image graph MI generated from a set of points, and the matching is carried out by minimizing a cost function. As the researcher shows, the proposed EGM achieves a higher CRR than popular feature matching methods, but still lower than the 2D Gabor filter technique introduced by Daugman in 1993.
3) Support vector machines: In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification. In [28], the researcher used an SVM for feature matching. Since this is a binary classification (match or non-match), the researcher highlights two aspects: the determination of the optimal hyperplane, and the transformation of a non-linearly separable classification problem into a linearly separable one [28].
As Fig. 23 shows, a linearly separable binary classification with Class 1 and Class 2 can be achieved without any misclassified data. In a linearly separable problem, there is a linear boundary hyperplane which separates the data. If we write it as
w \cdot x + b = 0 \qquad (11)
it implies that
y_i (w \cdot x_i + b) \geq 1, \quad i = 1, 2, \ldots, N \qquad (12)
Fig. 23. SVM with Linear separable data. [28]
Using the above equations and considering the distance between the hyperplanes, the support vector machine is able to classify the data, as the researcher claims. If the problem is not linearly separable, it can be transformed into a linear problem by mapping the data into a higher-dimensional space.
The researcher used FRR and FAR to evaluate the SVM model and claims it achieved excellent FAR values, although the FRR still has to be improved.
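To make Equations 11 and 12 concrete, the sketch below trains a minimal linear SVM by subgradient descent on the hinge loss. This is a generic illustration of the separating-hyperplane idea, not the specific training procedure of [28]; the hyperparameters and function names are our own:

```python
def train_linear_svm(xs, ys, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM trained by subgradient descent on the hinge
    loss, searching for w, b that realise Equations 11-12:
    y_i * (w . x_i + b) >= 1 for every training point.
    xs: list of feature tuples; ys: labels in {-1, +1}."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point violates the margin: push w towards it
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # correctly classified: only regularise w
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 = one class, -1 = the other."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

On a toy linearly separable set the learned hyperplane separates both classes; in [28] the inputs would be iris feature vectors and the two classes genuine and impostor comparisons.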
4) Weighted Euclidean distance: In [16], the Weighted Euclidean distance is used to match feature templates. Weighting the distance makes it possible to rank candidate templates from the most to the least favorable. Since the feature templates consist of numerical values, the most suitable template is the one that minimizes the distance given by Equation 13.
WED(k) = \sum_{i=1}^{N} \frac{(f_i - f_i^{(k)})^2}{(\delta_i^{(k)})^2} \qquad (13)
Here k is the template index, f_i denotes the ith feature of the unknown iris, f_i^{(k)} denotes the ith feature of the kth template, \delta_i^{(k)} denotes the standard deviation of the ith feature in the kth template, and N denotes the total number of features extracted.
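Equation 13 and the nearest-template decision rule can be sketched as follows (function names are ours, not from [16]):

```python
def weighted_euclidean_distance(f, template, sigma):
    """Equation 13: distance between an unknown feature vector f and an
    enrolled template, each squared difference weighted by the standard
    deviation of that feature in the template's class."""
    return sum((fi - ti) ** 2 / si ** 2
               for fi, ti, si in zip(f, template, sigma))

def best_match(f, templates, sigmas):
    """Return the index of the enrolled template with the smallest WED;
    the unknown iris is assigned to that template's identity."""
    dists = [weighted_euclidean_distance(f, t, s)
             for t, s in zip(templates, sigmas)]
    return min(range(len(dists)), key=dists.__getitem__)
```

A perfect match yields a distance of exactly 0, and features with large per-class deviations contribute less to the decision.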
F. Template Security
The most important factor in the security of an iris recognition system is keeping the iris templates themselves secure [29].
1) Least Significant Bit method: Attackers frequently target iris template databases. To keep the stored templates secure, LSB (Least Significant Bit) steganography can be used [29].
Fig. 24. Use Cover Image to hide Iris code. [29]
As in Fig. 24, after extracting features from the iris image and generating the iris code, the scheme takes a 24-bit cover image and uses the least significant bit of its blue color component to hide the iris code. A random number sequence determines the embedding positions, and the resulting stego image is stored in the database. During recognition, the stego image is retrieved from the database, the LSBs where the iris code is hidden are read out, the iris code is extracted, and the templates are compared using a suitable template matching technique [29].
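The embedding step can be illustrated with a minimal sketch. We simplify the scheme of [29] by writing the code bits sequentially rather than at random positions, and represent the cover image as a flat list of (r, g, b) tuples:

```python
def embed_bits_lsb(pixels, bits):
    """Hide an iris-code bit string in the least significant bit of the
    blue channel of a 24-bit RGB cover image. Simplified sketch: pixels
    are (r, g, b) tuples in scan order, no random-sequence step."""
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the code")
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            b = (b & ~1) | bits[i]  # overwrite the blue LSB with a code bit
        out.append((r, g, b))
    return out

def extract_bits_lsb(pixels, n_bits):
    """Recover the first n_bits hidden iris-code bits from a stego image."""
    return [b & 1 for (_, _, b) in pixels[:n_bits]]
```

Because only the lowest bit of one channel changes, the stego image is visually indistinguishable from the cover image while the full code survives a round trip.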
2) Template Protection: In [30], the researcher introduces several iris template protection techniques. An ideal protection method should have four properties: diversity, revocability, security and performance.
The first approach the researcher discusses is feature transformation, which has two categories: invertible and non-invertible feature transformation. Invertible transformation uses techniques such as salting and bio-hashing to transform the template features using a function specific to a particular user. This method has advantages such as a low FAR and the ability to generate multiple transformed templates for the same user, but it has one major issue: if the user key is compromised, the security of the template is no longer guaranteed. In non-invertible transformation, one-way functions such as hash functions are used. When a new template arrives, its hash value is computed and compared against the hash values stored in the database. The main advantage is that no one can recover the template even if the password is compromised; the main drawback is this very non-invertibility, since the original template can never be reconstructed from what is stored.
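A minimal sketch of the non-invertible, hash-based idea follows. This is a generic illustration, not the concrete construction of [30]; note the comment on noise, which is why practical schemes use error-tolerant constructions rather than a bare hash:

```python
import hashlib

def protect_template(code_bits, salt):
    """Non-invertible protection: store only a salted SHA-256 digest of
    the quantised iris code, never the code itself. Caveat: fresh iris
    codes are noisy, so real systems need error-tolerant schemes; an
    exact-match hash like this one only works for bit-identical codes."""
    payload = salt + bytes(code_bits)
    return hashlib.sha256(payload).hexdigest()

def verify(code_bits, salt, stored_digest):
    """Recompute the digest for a probe code and compare with storage."""
    return protect_template(code_bits, salt) == stored_digest
```

Even with full access to the database, an attacker holds only digests, from which the iris codes cannot be reconstructed.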
Biometric cryptosystems are a newer approach that combines biometrics with cryptography to provide a high level of security for templates. In a biometric cryptosystem, some public information about the biometric template is stored. This public information is usually referred to as helper data, and hence biometric cryptosystems are also known as helper-data-based methods [30].
III. DISCUSSION
Most iris recognition systems perform accurately up to some extent, but they can still be tuned in stages such as iris segmentation, eyelid and eyelash detection, and template matching. A recent study used a completely CNN-based solution for iris segmentation in deceased subjects and, as its authors claim, it outperformed the algorithms of some commercial products. Another study from 2019 found a promising method for processing the segmented image for normalization when segmentation is carried out by a DCNN model. These studies indicate how accurately these phases can be carried out by adopting machine learning models such as deep convolutional neural networks. We believe that if this works for deceased subjects, it may, after a few adjustments, also be usable for living subjects.
The image acquisition phase can be further improved with the help of fuzzy image processing and advanced image sensors. Even if the image sensor fails to block the effects of glare, homomorphic filtering can be used to some extent to remove non-uniform illumination before the captured image is passed to segmentation.
Similarly, feature extraction and feature matching can be improved further. In recent years, researchers have worked to find efficient CNN-based feature extraction methods and, as the results indicate, they have succeeded. The clearly visible remaining issue is the reliance on the well-known iris databases for training: these data sets comprise images captured under controlled conditions and do not cover many practical situations, such as the glare that appears with eyeglasses. Such scenarios are essential in neural network training to obtain the best results, while the publicly available data sets can be reserved for validation and testing of the proposed models. Since the release of Google Dataset Search, researchers can locate images across different databases covering the possible scenarios under real environment conditions, and can assemble their own data sets for training.
IV. CONCLUSION
This paper provides a review of well-known iris recognition algorithms proposed by different researchers over time. Almost all of the surveyed works follow the main steps of iris recognition: acquisition, segmentation, normalization, feature extraction and feature matching. Methods that use machine learning models such as DCNNs, CNNs and capsule networks currently show good results compared to similar work. As discussed in the Discussion section, if researchers adopt the proposed improvements, we believe this field can be advanced further.
REFERENCES
[1] R. Bremananth, “A Robust Eyelashes and Eyelid Detection in Transfor-
mation Invariant Iris Recognition: In Application with LRC Security
System,” International Journal of Computer, Electrical, Automation,
Control and Information Engineering, vol. 10, pp. 1825–1831, Oct.
2016.
[2] J. Daugman, “How iris recognition works,” IEEE Transactions on
Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30,
Jan. 2004.
[3] “Improving Iris Recognition Performance Using Segmentation,
Quality Enhancement, Match Score Fusion, and Index-
ing - IEEE Journals & Magazine.” [Online]. Available:
https://ieeexplore.ieee.org/document/4510759
[4] L. Masek, “Recognition of Human Iris Patterns for Biometric Identification,” 2003.
[5] P. Verma, M. Dubey, S. Basu, and P. Verma, “Hough Transform Method
for Iris Recognition-A Biometric Approach,” vol. 1, no. 6, p. 6, 2012.
[6] W. Kong and D. Zhang, Accurate iris segmentation based on novel
reflection and eyelash detection model,” Feb. 2001, pp. 263–266.
[7] U. Tania, S. Motakabber, and M. Ibrahimy, “Edge detection techniques
for iris recognition system,” IOP Conference Series: Materials Science
and Engineering, vol. 53, Nov. 2013.
[8] “The Comparison of Iris Detection Using Histogram Equalization and Adaptive Histogram Equalization Methods.”
[9] S. H. and S. Malisuwan, “A Study of Image Enhancement for Iris
Recognition,” Journal of Industrial and Intelligent Information, vol. 3,
Jan. 2018.
[10] “Eyelids, eyelashes detection algorithm and Hough transform method for noise removal in iris recognition,” Indonesian Journal of Electrical Engineering and Computer Science. [Online]. Available: http://ijeecs.iaescore.com/index.php/IJEECS/article/view/20954
[11] M. Liu, Z. Zhou, P. Shang, and D. Xu, “Fuzzified Image Enhancement
for Deep Learning in Iris Recognition,” IEEE Transactions on Fuzzy
Systems, vol. 28, no. 1, pp. 92–99, Jan. 2020.
[12] M. Trokielewicz, A. Czajka, and P. Maciejewicz, “Post-mortem iris
recognition with deep-learning-based image segmentation,” Image and
Vision Computing, vol. 94, p. 103866, Feb. 2020. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0262885619304597
[13] H. Hofbauer, E. Jalilian, and A. Uhl, “Exploiting superior CNN-based iris segmentation for better recognition accuracy,” Pattern Recognition Letters, vol. 120, pp. 17–23, Apr. 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167865518309395
[14] S. Nithyanandam, K. S. Gayathri, and P. L. K. Priyadarshini, “A New IRIS Normalization Process For Recognition System With Cryptographic Techniques,” arXiv:1111.5135 [cs], Nov. 2011. [Online]. Available: http://arxiv.org/abs/1111.5135
[15] L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric
filters,” Object recognition supported by user interaction for service
robots, 2002.
[16] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on
iris patterns,” in Proceedings 15th International Conference on Pattern
Recognition. ICPR-2000, vol. 2, Sep. 2000, pp. 801–804 vol.2, iSSN:
1051-4651.
[17] Junzhou Huang, Yunhong Wang, Tieniu Tan, and Jiali Cui, “A new
iris segmentation method for recognition,” in Proceedings of the 17th
International Conference on Pattern Recognition, 2004. ICPR 2004.
Cambridge, UK: IEEE, 2004, pp. 554–557 Vol.3. [Online]. Available:
http://ieeexplore.ieee.org/document/1334589/
[18] Z. Hussain and D. Agarwal, “A COMPARATIVE ANALYSIS OF EDGE
DETECTION TECHNIQUES USED IN FLAME IMAGE PROCESS-
ING,” 2019.
[19] R. F. Mansour, “Iris Recognition Using Gauss Laplace Filter,” American Journal of Applied Sciences, vol. 13, no. 9, pp. 962–968, Sep. 2016. [Online]. Available: http://thescipub.com/abstract/10.3844/ajassp.2016.962.968
[20] C.-L. Tisse, L. Martin, L. Torres, and M. Robert, “Person Identification Technique Using Human Iris Recognition,” in Proc. of Vision Interface, 2002, pp. 294–299.
[21] Y. Wen, J. Zhou, Y. Wu, and M. Wang, “Iris Feature Extraction based
on Haar Wavelet Transform,” International Journal of Security and Its
Applications, vol. 8, pp. 265–272, Jul. 2014.
[22] O. Abikoye, S. S., A. S., and J. Gbenga, “Iris Feature Extraction for
Personal Identification using Fast Wavelet Transform (FWT),” Interna-
tional Journal of Applied Information Systems (IJAIS), vol. 6, pp. 1–6,
Jan. 2018.
[23] Y. He, G. Feng, Y. Hou, L. Li, and E. Micheli-Tzanakou, “Iris feature
extraction method based on LBP and chunked encoding,” in 2011
Seventh International Conference on Natural Computation, vol. 3, Jul.
2011, pp. 1663–1667, iSSN: 2157-9555.
[24] S. Zhu, Z. Song, and J. Feng, “Face recognition using local binary
patterns with image Euclidean distance - art. no. 67904Z,” Proceedings
of SPIE - The International Society for Optical Engineering, Nov. 2007.
[25] S. Minaee, A. Abdolrashidiy, and Y. Wang, “An experimental study of deep convolutional features for iris recognition,” Dec. 2016, pp. 1–6.
[26] R. Wildes, “Iris recognition: an emerging biometric technology,”
Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[Online]. Available: http://ieeexplore.ieee.org/document/628669/
[27] R. Farouk, “Iris recognition based on elastic graph matching and Gabor wavelets,” Computer Vision and Image Understanding, vol. 115, pp. 1239–1244, Aug. 2011.
[28] H. Ali and M. Salami, “Iris Recognition System Using Support Vector
Machines,” Oct. 2011.
[29] S. Chaudhary and R. Nath, “A new template protection approach for
iris recognition,” in 2015 4th International Conference on Reliability,
Infocom Technologies and Optimization (ICRITO) (Trends and Future
Directions), Sep. 2015, pp. 1–6, iSSN: null.
[30] R. Gupta and A. Kumar, “A Study of Iris Template Protection Tech-
niques for a Secure Iris Recognition System,” International Journal of
Engineering Research, vol. 4, no. 02, p. 5.