Procedia Computer Science 102 ( 2016 ) 26 – 33
1877-0509 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the Organizing Committee of ICAFS 2016
doi: 10.1016/j.procs.2016.09.365
12th International Conference on Application of Fuzzy Systems and Soft Computing, ICAFS
2016, 29-30 August 2016, Vienna, Austria
Biometric retina identification based on neural network
Fahreddin Sadikoglua,*, Selin Uzelaltinbulatb
aDepartment of Electrical and Electronic Engineering, Near East University, P.O.Box:99138, Nicosia, North Cyprus, Mersin 10 Turkey
bDepartment of Computer Engineering, Near East University, P.O.Box:99138, Nicosia, North Cyprus, Mersin 10 Turkey
Abstract
In this paper the design of a recognition system for retinal images using a neural network is considered. Retina based recognition is
perceived as one of the most secure biometric methods used to distinguish individuals. The retina recognition stages,
including retina image acquisition, feature extraction and classification of the features, are discussed. The structure of the neural
network based retina identification system is presented. Training of the neural network based recognition system is performed using
the backpropagation algorithm. The structure of the neural network used for retina recognition and its learning algorithm are described.
The implementation of the recognition system has been done using the MATLAB package.
© 2016 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the Organizing Committee of ICAFS 2016.
Keywords: Neural network; retina recognition; backpropagation algorithm.
1. Introduction
Biometric recognition, or biometrics, refers to the automatic identification of a person based on his/her
anatomical (e.g. fingerprint, iris) or behavioural (e.g. signature) characteristics or traits. This method of
identification offers several advantages over traditional methods involving ID cards (tokens) or PIN numbers
(passwords) for various reasons:
• the person to be identified is required to be physically present at the point of identification;
• identification based on biometric techniques obviates the need to remember a password or carry a token.
With the increased integration of computers and internet into our everyday life, it is necessary to protect sensitive
and personal data. By replacing PINs (or using biometrics in addition to PINs), biometric techniques can potentially
prevent unauthorized access to ATMs, cellular phones, laptops, and computer networks. Unlike biometric traits,
PINs or passwords may be forgotten, and credentials like passports and driver's licenses may be forged, stolen, or
lost18. As a result, biometric systems are being deployed to enhance security and reduce financial fraud. Various
biometric traits are being used for real-time recognition; these are fingerprint recognition, facial recognition, iris
recognition, hand geometry recognition, voice recognition, keystroke recognition, signature recognition, speech
recognition and retinal recognition3. Nowadays these biometrics are increasingly used to attain higher security
and to handle failure-to-enrol situations. A biometric system is essentially a pattern recognition system that
operates by acquiring biometric data from an individual, extracting a feature set from the acquired data, and
comparing this feature set against the template set in the database. One of the biometric technologies used for
identification of persons is retinal identification. Retinal identification is an automatic method that provides
true identification of the person by acquiring an internal body image - the retina/choroid - of a willing person who
must cooperate in a way that would be difficult to counterfeit1. The human retina is a thin tissue composed of neural
cells. Because of the complex structure of the capillaries that supply the retina, each person's retina is unique. The network
of blood vessels in the retina is not entirely genetically determined, and thus even identical twins do not share a
similar pattern. The blood vessels at the back of the eye have a unique pattern for each person. In a number of studies the blood vessels are
segmented and used for recognition of retina images, and various segmentation algorithms of blood vessels are
presented for identification of retinal images. Retina identification has found application in very high security
environments (nuclear research and weapons sites, communications control facilities and a very large transaction-
processing centre). In this paper the design of a retina identification system using neural networks is presented. The
design of such a system will allow automating personal identification using the retina7. The paper is organised as
follows. Sec. 2 describes the structure of the retina recognition system. Sec. 3 describes the retina recognition system
using neural networks. Sec. 4 presents the experimental results obtained for the retina identification system. The final section
presents the conclusions of the paper.
2. Retina Identification System
Retina recognition technology captures and analyzes the patterns of blood vessels on the thin nerve on the back
of the eyeball that processes light entering through the pupil. Retinal patterns are highly distinctive traits. Every eye
has its own totally unique pattern of blood vessels; even the eyes of identical twins are distinct. Although each
pattern normally remains stable over a person's lifetime, it can be affected by diseases such as glaucoma, diabetes,
high blood pressure, and acquired immune deficiency syndrome. The fact that the retina is small, internal, and difficult to
measure makes capturing its image more difficult than for most biometric technologies. An individual must position the
eye very close to the lens of the retina-scan device, gaze directly into the lens, and remain perfectly still while
focusing on a revolving light while a small camera scans the retina through the pupil. Any movement can interfere
with the process and can require restarting. Enrolment can easily take more than a minute. The generated template is
only 96 bytes, one of the smallest of the biometric technologies10. One of the most accurate and most reliable of the
biometric technologies, it is used for access control in government and military environments that require very high
security, such as nuclear weapons and research sites. However, the great degree of effort and cooperation required of
users has made it one of the least deployed of all the biometric technologies. Newer, faster, better retina recognition
technologies are being developed. The overall retinal scanning process may be broken down into three sub-
processes:
i. Image acquisition,
ii. Computer based processing,
iii. Features extraction and identification.
The block diagram of the designed retina recognition system is given in Fig. 1. The retina recognition includes
three phases: image/signal acquisition, pre-processing and image classification (recognition). The image acquisition
and processing phase is the most complicated, and whether this sub-process can be completed largely depends on user
cooperation. For scanning, the user's eye must be positioned very close to the lens. Moreover, glasses must be
removed to avoid signal interference. On looking into the camera, the user sees a green light against a white
background. Once the camera is activated, the green light moves in a complete 360-degree circle. The blood
vessel pattern of the retina is captured during this process. Three to five images are captured at this stage.
Depending on the level of user cooperation, the capturing phase can take as long as one minute. The retinal image
acquisition process is presented in1. During image/signal acquisition and conversion (capturing an image of the retina
and converting it to a digital format), the retina images must be clear and sharp1; the clarity and sharpness of the
captured retina affect the quality of the retina images. The next stage involves data extraction. As
genetic factors do not dictate the pattern of the blood vessels, the retina contains a diversity of unique features. In
the pre-processing stage, the retina is extracted from an eye image and then, using a segmentation procedure, the vascular
representation of the retinal image is obtained. This image, after normalization and enhancement, is represented by a
feature vector that contains the converted numeric values of the retinal image. For classification a neural network is used;
the feature vectors become the training data set for the neural network. The retina classification system includes two
operation modes: training mode and online mode. In the first stage, the training of the recognition system is carried out
using grayscale values of retina images. After training, in online mode, the neural network performs classification and
recognizes the patterns that belong to a certain retinal image.
Fig. 1. A block diagram of the retina recognition system
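To make the flow of Fig. 1 concrete, the sketch below chains the stages in MATLAB. It is a minimal illustrative sketch, not the authors' code: the helper functions preprocessRetina and extractFeatures, the file names, the class targets and the hidden-layer size are assumptions.

% Sketch of the Fig. 1 pipeline and its two operation modes (illustrative only).
% -- training mode --
for p = 1:numPersons
    img        = imread(trainFiles{p});         % retina image acquisition (file list assumed)
    gray       = preprocessRetina(img);         % pre-processing (hypothetical helper, cf. Sec. 4.1)
    Feats(:,p) = extractFeatures(gray);         % feature vector of segment averages (hypothetical helper)
end
net = train(feedforwardnet(35), Feats, Targets);  % classification stage: train the feed-forward NN
% -- online mode --
probe       = extractFeatures(preprocessRetina(imread('probe.tif')));
[~, person] = max(net(probe));                  % recognized retina pattern / class index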
3. Neural Network Based Retina Recognition System
In this paper a feed-forward neural network (NN) is applied for identification of retina images. The NN used includes
input, hidden, and output layers. The sigmoid activation function is used in the neurons of the hidden and output layers.
Once the activations of the hidden-layer neurons are computed, they are fed to the next layer until all the
activations finally reach the output layer. Each output-layer neuron is associated with a specific classification
category. In the multilayer feed-forward network of Fig. 2 each neuron of the previous layer is connected to the neurons of the
next layer by weight coefficients. In computing the value of each neuron in the hidden and output layers, one
must first take the sum of the weighted inputs and the bias and then apply the activation function f(sum) (the sigmoid
function) to calculate the neuron's activation. The extracted features of the retinal images are the inputs of the neural
network. In this structure, x1, x2, …, xm are the input features that characterize the retinal image. The k-th output of
the two-layer neural network is determined by formula (1).
y_k = f\left( \sum_{j=1}^{h} v_{jk} \, f\left( \sum_{i=1}^{m} w_{ij} x_i \right) \right) \qquad (1)
where w_{ij} are the weights between the input and hidden layers of the network, v_{jk} are the weights between the hidden and
output layers, f is the sigmoid activation function used in the neurons, and x_i is the input signal. Here k = 1,…,n; j = 1,…,h;
i = 1,…,m; and m, h and n are the numbers of neurons in the input, hidden and output layers, correspondingly.
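Written out as code, formula (1) is a single forward pass through the two layers. The following MATLAB sketch assumes an m-by-h weight matrix W (entries w_ij), an h-by-n matrix V (entries v_jk) and bias vectors bh, bo; these are illustrative names rather than the authors' implementation.

% Forward pass of the two-layer feed-forward network of Fig. 2, eq. (1).
% x: m-by-1 input feature vector, W: m-by-h, V: h-by-n, bh: h-by-1, bo: n-by-1.
f      = @(s) 1 ./ (1 + exp(-s));    % sigmoid activation function
hidden = f(W' * x + bh);             % hidden activations f(sum_i w_ij x_i + bias)
y      = f(V' * hidden + bo);        % outputs y_k = f(sum_j v_jk * hidden_j + bias)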
Fig. 2. Multilayer feed-forward network
After activation of the neural network, the training of the parameters of the neural network starts. The neural network is trained
using the feature data set extracted from the retina images. During learning, 10-fold cross validation is used for evaluation
of classification accuracy. A set of experiments is carried out in order to achieve the required accuracy at the neural
network output. The simulation is performed using different numbers of neurons in the hidden layer. The number of
output neurons was 8, which was equal to the number of classes. The backpropagation algorithm is applied for
training of the neural network8. Neural network training consists of minimizing the usual least-squares cost function (2):
E = \frac{1}{2} \sum_{p=1}^{\lambda} \left( y^{d} - y \right)^{2} \qquad (2)
where λ is the number of training samples for each class, and y^d and y are the desired and current outputs for the p-th input
vector. The training of the neural network parameters has been carried out in order to generate a proper neural
network model. The parameters w_{ij}, v_{jk} (i = 1,…,m; j = 1,…,h; k = 1,…,n) of the NN are adjusted using the following
formulas (3):
w_{ij}(t+1) = w_{ij}(t) - \gamma \, \frac{\partial E(t)}{\partial w_{ij}} + \lambda \left( w_{ij}(t) - w_{ij}(t-1) \right);
v_{jk}(t+1) = v_{jk}(t) - \gamma \, \frac{\partial E(t)}{\partial v_{jk}} + \lambda \left( v_{jk}(t) - v_{jk}(t-1) \right) \qquad (3)
where γ is the learning rate and λ is the momentum coefficient; i = 1,…,m; j = 1,…,h; k = 1,…,n; m, h, n are the numbers of input, hidden
and output neurons of the network. The derivatives are determined as (4):
\frac{\partial E(t)}{\partial v_{jk}} = \frac{\partial E(t)}{\partial y_k} \, \frac{\partial y_k}{\partial v_{jk}} = (y_k - y_k^{d}) \, y_k (1 - y_k) \, y_j ;
\frac{\partial E(t)}{\partial w_{ij}} = \sum_{k} \frac{\partial E(t)}{\partial y_k} \, \frac{\partial y_k}{\partial y_j} \, \frac{\partial y_j}{\partial w_{ij}} = \sum_{k} (y_k - y_k^{d}) \, y_k (1 - y_k) \, v_{jk} \; y_j (1 - y_j) \, x_i \qquad (4)
where y_j = f\left( \sum_{i=1}^{m} w_{ij} x_i \right) is the output of the j-th hidden neuron.
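The update rules (2)-(4) can be implemented as a short gradient-descent loop. The MATLAB sketch below performs one epoch of plain backpropagation with momentum under the notation above; the learning rate, momentum value and variable names are assumptions, and biases are omitted for brevity, so it illustrates the algorithm rather than reproducing the authors' training code.

% One training epoch of backpropagation with momentum, following eqs. (2)-(4).
% X: m-by-P input feature vectors, Yd: n-by-P desired outputs,
% W: m-by-h and V: h-by-n weight matrices (biases omitted for brevity).
f      = @(s) 1 ./ (1 + exp(-s));            % sigmoid activation
gamma  = 0.1;  mom = 0.9;                    % learning rate and momentum (assumed values)
dWprev = zeros(size(W));  dVprev = zeros(size(V));
for p = 1:size(X, 2)
    x  = X(:, p);   yd = Yd(:, p);
    hj = f(W' * x);                          % hidden outputs y_j
    yk = f(V' * hj);                         % network outputs y_k, eq. (1)
    dk = (yk - yd) .* yk .* (1 - yk);        % output deltas from eqs. (2) and (4)
    dV = hj * dk';                           % dE/dv_jk, first line of eq. (4)
    dW = x * ((V * dk) .* hj .* (1 - hj))';  % dE/dw_ij, second line of eq. (4)
    stepW = -gamma * dW + mom * dWprev;      % weight change, eq. (3)
    stepV = -gamma * dV + mom * dVprev;
    W = W + stepW;   V = V + stepV;
    dWprev = stepW;  dVprev = stepV;
end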
4. Simulation
4.1. Pre-processing
The design of the retina identification system is considered using neural networks. At the start, an image database
including retinal images is used for design purposes. For this reason, in this paper we use the DRIVE database, which is a
publicly available database. The RGB retina images are transformed into grayscale images. A grayscale image is
simply one in which the only colors are shades of grey. The reason for differentiating such an image from any other
sort of color image is that less information needs to be provided for each pixel. In fact a grey color is one in
which the red, green, and blue components all have equal intensity in RGB space, so it is only necessary to specify a
single intensity value for each pixel, as opposed to the three intensities needed to specify each pixel in a full color
image. Often, the grayscale intensity is stored as an 8-bit integer, giving 256 possible different shades of grey from
black to white. Fig. 3(a) shows the colored RGB retina image that is transformed to the grayscale retina image in
Fig. 3(b). In this paper, for identification of retinal images, we use the segmentation results of these images. As a result
of segmentation, the vascular representation of the retinal image is obtained; Fig. 3(c) shows the result of segmentation of the
retinal image. The obtained image is scaled (Fig. 4) and used for recognition purposes. Scaling is defined as the
increase or reduction of image size by a fixed ratio. We first smooth the image by spatial convolution and then reduce its
resolution. However, in a scale-down by a specific factor in the respective directions, the image width-to-height ratio
of the reduced result remains equal to the original image width-to-height ratio. Scaling is applied to decrease
the size of the input data.
Fig. 3. RGB (a), greyscale (b) and segmented (c) retina images from the DRIVE database
Fig. 4. Scale down of retina image.
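In MATLAB, the grayscale conversion and scale-down described above reduce to two library calls. In the sketch below the file name and the scale factor are assumptions chosen only for illustration, not values stated in the paper.

% Grayscale conversion and scale-down of a retina image (illustrative).
rgb   = imread('retina_01.tif');            % RGB retina image (file name assumed)
gray  = rgb2gray(rgb);                      % 8-bit grayscale, 256 shades (uint8)
small = imresize(gray, 0.25, 'bilinear');   % scale down; default antialiasing smooths before shrinking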
The input feature vectors are obtained by converting the segmented retinal image into numeric values. However, the
size of such an input vector would be large. Therefore the scaled image is divided into segments and averaged. This operation is
based on averaging the pixel values within each segment of a pattern, thus yielding one average pixel value per segment.
The average of the k-th segment is calculated as (5):
av_k = \frac{1}{N M} \sum_{i=1}^{N} \sum_{j=1}^{M} x(i,j) \qquad (5)
where x(i,j) are the pixel values of the k-th segment and N, M are the segment dimensions in pixels.
The averaged output of each segment forms the feature vector that enters the neural network input. The averaging
operation allows the size of the input feature vector to be decreased substantially.
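Equation (5) amounts to block averaging of the scaled image. A minimal MATLAB sketch, assuming for illustration 8x8-pixel segments (the segment size is not stated in the paper), the Image Processing Toolbox function blockproc, and the scaled grayscale image small from the previous sketch:

% Feature extraction by segment averaging, eq. (5): one mean value per segment.
segSize = [8 8];                                           % N-by-M segment size (assumed)
avgImg  = blockproc(double(small), segSize, @(b) mean(b.data(:)));
feat    = avgImg(:);                                       % column feature vector for the NN input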
4.2. Neural network based classification
The neural network based retina recognition system is modelled in Matlab. Fig. 5 describes the network
structure of the recognition system. The retina recognition system based on neural networks uses three layers: input,
hidden, and output. The neural network is trained using a data set extracted from the retina images. As
mentioned above, the vessel representation of the retina images is digitised and transformed into numeric values. The
average values of each segment are the inputs for the neurons of the input layer, as shown in Fig. 5. The outputs of the input
neurons are the inputs of the hidden layer, and each possible answer is represented by a single output neuron. As in most
networks, the data is encoded in the links between neurons.
Fig. 5. Neural network structure
The training of the network is implemented using the Matlab package. For the experimental study the DRIVE database is
taken. The network is initially trained for a maximum of 10000 epochs. The parameters of the network are selected
as follows:
Fig. 6. Initialization of the parameters of the neural network
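A comparable parameter initialization with the MATLAB Neural Network Toolbox might look as follows; this is a sketch that interprets the accuracy of 0.1 mentioned in Sec. 4 as the SSE training goal and assumes illustrative values for the remaining settings, not a reproduction of the initialization shown in Fig. 6.

% Network creation and training-parameter initialization (cf. Fig. 6).
net = feedforwardnet(35, 'traingd');    % one hidden layer; gradient-descent backpropagation
net.performFcn        = 'sse';          % sum squared error, as plotted in Fig. 8
net.trainParam.epochs = 10000;          % maximum number of epochs
net.trainParam.goal   = 0.1;            % error goal (assumed interpretation of the 0.1 accuracy)
net.trainParam.lr     = 0.1;            % learning rate gamma (assumed value)
[net, tr] = train(net, Feats, Targets); % Feats: feature vectors, Targets: class labels (assumed names)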
In a biometrics based system, the accuracy of the implemented algorithms is very important and they must be tested
properly. The images from the DRIVE7 dataset are used in order to check the validity and accuracy of the proposed system.
Fig. 7 shows different images from the DRIVE database. The DRIVE database also includes ground truth for vascular
segmentation. It includes 233 retinal images with a resolution of 768x584, from 139 different persons. The proposed
retinal recognition system is tested on a total of 40 images.
Fig. 7. Retina images taken from DRIVE database
A retinal image contains a unique pattern for each individual and it is almost impossible to forge that pattern by
a false individual. However, its high cost and acquisition related drawbacks have prevented it from making a
commercial impact. A feature vector is formed using the vascular segmentation results of the retinal images. This feature
vector, with the neural network, is applied for recognition of retinal images. For training of the neural network the
backpropagation algorithm is applied. Training of the neural network used for recognition of retinal images is shown in
Fig. 8. As shown in the figure, training is performed for 10000 epochs, with an accuracy of 0.1. After neural network
training, the recognition of the images has been done. Table 1 depicts the accuracy of the neural network based classifier
using different numbers of hidden neurons. A recognition rate of 97.5% is obtained with the neural network having 35
hidden neurons. Table 2 shows the comparison of recognition rates of different methods on the DRIVE database. The
simulation results show that the neural network based system achieves a satisfactory recognition rate for the DRIVE
database and the proposed system can be used in a biometric based personal identification system.
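The recognition rate reported in Table 1 can be computed by comparing the winning network output with the true class label of each test image; a minimal sketch with assumed variable names testFeats and testLabels:

% Recognition rate over the test images (cf. Table 1).
outputs     = net(testFeats);                    % network responses, one column per test image
[~, winner] = max(outputs, [], 1);               % predicted class = index of the largest output
recRate     = 100 * mean(winner == testLabels);  % recognition rate in percent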
Fig. 8. Performance of neural network training
[Fig. 8: sum squared error (SSE) versus training epochs (log scale); best training performance is 2.0009 at epoch 9956; curves shown: Train, Best, Goal.]

Table 1. Simulation results of the NN based retina recognition system

Number of hidden neurons | RMSE     | Accuracy (%)
8                        | 0.070726 | 85
16                       | 0.039545 | 92.5
25                       | 0.030626 | 95
35                       | 0.017694 | 97.5

Table 2 depicts performance comparisons of the proposed method with other existing methods on the DRIVE database.
Table 2. Comparative results.

Techniques           | Total images | Recognition rate (%)
Kande [13]           | 40           | 89.11
Martinez-Perez [14]  | 40           | 91.81
Perez [15]           | 40           | 93.20
Zana and Klein [16]  | 40           | 89.84
Fraz [17]            | 40           | 94.30
This paper           | 40           | 97.50
Conclusion
The paper is devoted to the synthesis of a neural network based retina recognition system. Retina based
recognition is one of the most secure methods for identification of an individual. The structure of the recognition system for
retinal images is designed. The system includes pre-processing and classification stages. Pre-processing is applied to
transform retina images to greyscale values and extract input features from the images. These features are the input
signal for the neural network. The neural network is applied to classify retina patterns in the recognition step. The operating
principle and learning algorithm of the neural network based retina recognition system are presented. The DRIVE retina
database is used in simulation. Implementation of the retina recognition system is done using the MATLAB package.
The located retina images, after pre-processing, are represented by a data set. Using this data set as the input signal, the
neural network is used to recognize the retina patterns. The recognition accuracy for the image patterns was 97.50%.
References
1. Robert BH. Biometrics: Personal identification in networked society. Edited by A. K. Jain, R. Bolle, S. Pankanti. Berlin: Springer; 1999.
2. Schmid N. Retina identification. Biometric systems; 2004.
3. Rahib HA, Altunkaya K. Neural network based biometric personal identification with fast iris segmentation. International Journal of
Control, Automation and Systems 2009; Volume 7, No. 1.
4. Rahib HA, Kilic K. Robust feature extraction and iris recognition for biometric personal identification. InTech Open Access
Publisher; 2011.
5. Rahib HA, Altunkaya K. Personal iris recognition using neural networks. International Journal of Security and its Applications April 2008;
Volume 2, No. 2.
6. Akram MU, Tariq A, Khan SA. Retinal recognition: Personal identification using blood vessels. 6th International Conference on Internet
Technology and Secured Transactions, Abu Dhabi, United Arab Emirates; 11-14 December 2011.
7. Amiri MD, Tab FA, Barkhoda W. Retina identification based on the pattern of blood vessels using angular and radial
partitioning. Proceedings of Advanced Concepts for Intelligent Vision Systems, Bordeaux, France; 2009.
8. Gonzalez RC, Woods RE. Digital image processing; 1992.
9. Staal J, Abramoff MD, Niemeijer M, Viergever MA, Ginneken BV. Ridge-based vessel segmentation in color images of the retina. IEEE
Trans Med Imaging April 2004; Volume 23, No. 4: 501-509.
10. Soares JVB, Leandro JJG, Cesar RM, Jelinek HF, Cree MJ. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised
classification. IEEE Trans Med Imaging 2006; Volume 25, No. 9: 1214-1222.
11. Frucci M, Riccio D, Baja GS, Serino L. Segmenting vessels in retinal images; 2015.
12. Sreejini KS, Govindan VK. Improved multiscale matched filter for retinal vessel segmentation using PSO algorithm. Egyptian Informatics
Journal 2015.
13. Kande GB, Subbaiah PV, Savithri TS. Unsupervised fuzzy based vessel segmentation in pathological digital fundus images. J Med Syst 2010.
14. Martinez-Perez ME, Hughes AD, Thom SA, Bharath AA, Parker KH. Segmentation of blood vessels from red-free and fluorescein retinal images.
Med Image Anal 2007.
15. Perez ME, Hughes AD, Thom SA, Parker KH. Improvement of a retinal blood vessel segmentation method using the Insight Segmentation
and Registration Toolkit (ITK). In: Engineering in Medicine and Biology Society, EMBS 2007, 29th Annual International Conference of the
IEEE; 2007: 892-895.
16. Zana F, Klein JC. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Trans Image
Process 2001; 10(7): 1010-9.
17. Fraz MM, Barman SA, Remagnino P, Hoppe A, Basit A, Uyyanonvara B, Rudnicka AR, Owen CG. An approach to localize the retinal
blood vessels using bit planes and center line detection. Comput Methods Prog Biomed 2012; 108(2): 600-16.
18. Ibrahim D. A'dan Z'ye Matlab ile Çalışmak (Working with Matlab from A to Z); 2004.