Article

Iris Recognition Based on Human-Interpretable Features


Abstract

The iris is a stable biometric trait that has been widely used for human recognition in various applications. However, deployment of iris recognition in forensic applications has not been reported. A primary reason is the lack of human-friendly techniques for iris comparison. To further promote the use of iris recognition in forensics, the similarity between irises should be made visualizable and interpretable. Recently, a human-in-the-loop iris recognition system was developed, based on detecting and matching iris crypts. Building on this framework, we propose a new approach for detecting and matching iris crypts automatically. Our detection method is able to capture iris crypts of various sizes. Our matching scheme is designed to handle potential topological changes in the detection of the same crypt in different images. Our approach outperforms the known visible-feature-based iris recognition method on three different data sets. In particular, our approach achieves over 22% higher rank one hit rate in identification, and over 51% lower equal error rate in verification. In addition, the benefit of our approach on multi-enrollment is experimentally demonstrated.
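The verification metrics quoted above (rank-one hit rate, equal error rate) can be made concrete with a small sketch. The following threshold sweep over genuine and impostor similarity scores is purely illustrative; the score values and function name are assumptions, not data or code from the paper.

```python
def equal_error_rate(genuine, impostor):
    """Estimate the equal error rate (EER) from genuine and impostor
    similarity scores (higher score = more similar): sweep a decision
    threshold and return the point where the false accept rate (FAR)
    and false reject rate (FRR) balance."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(genuine + impostor):
        frr = sum(g < t for g in genuine) / len(genuine)     # rejected genuine pairs
        far = sum(i >= t for i in impostor) / len(impostor)  # accepted impostor pairs
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

A lower EER means the genuine and impostor score distributions overlap less; the "over 51% lower equal error rate" claimed above refers to a relative reduction of this quantity.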


... This human-machine pairing is important, as human subjects can provide an incorrect decision even after spending considerable time observing many iris regions [120]. In addition, there is a body of research showing that humans and machines do not perform equally well under different conditions [20,108,154]. For example, Moreira et al. showed that machines can outperform humans on easy, healthy iris image pairs, whereas humans outperform machines on disease-affected iris image pairs [108]. ...
... Interestingly, the need for human-interpretable methods was raised even in the handcrafted era. For example, Shen et al. published a series of works [20,152] on using iris crypts for iris matching. Iris crypts are clearly visible to humans, in much the same way as fingerprint minutiae. ...
Preprint
Full-text available
In this survey, we provide a comprehensive review of more than 200 papers, technical reports, and GitHub repositories published over the last 10 years on the recent developments of deep learning techniques for iris recognition, covering broad topics on algorithm designs, open-source tools, open challenges, and emerging research. First, we conduct a comprehensive analysis of deep learning techniques developed for two main sub-tasks in iris biometrics: segmentation and recognition. Second, we focus on deep learning techniques for the robustness of iris recognition systems against presentation attacks and via human-machine pairing. Third, we delve deep into deep learning techniques for forensic application, especially in post-mortem iris recognition. Fourth, we review open-source resources and tools in deep learning techniques for iris recognition. Finally, we highlight the technical challenges, emerging research trends, and outlook for the future of deep learning in iris recognition.
... Daugman's algorithm and workflow are still widely utilized in current iris recognition systems. Later, numerous iris feature extraction approaches arose, including variations of the Gabor kernel [31][32][33], SIFT- and SURF-based features [34][35][36], feature fusion methods [37][38][39], and human-in-the-loop methods [40,41]. These methods usually yield notable performance with little training data. ...
... Since the comparison of paired iris images is more a matter of distance measurement than of classification, we take MSE with the L2-norm as the loss function. Experimental results indicate that the MSE loss outperforms the cross-entropy (CE) loss and hinge-based loss [40]. The learning objective is the following: ...
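As a minimal sketch of the idea in the excerpt above — regressing a pairwise match score toward its label with an MSE objective rather than classifying the pair — consider the following. The function name and the plain-Python formulation are illustrative assumptions, not the authors' implementation.

```python
def mse_loss(scores, labels):
    """Mean squared error between predicted match scores for iris pairs and
    their ground-truth labels (1 = same iris, 0 = different), treating
    verification as distance regression rather than classification."""
    assert len(scores) == len(labels)
    return sum((s - y) ** 2 for s, y in zip(scores, labels)) / len(scores)
```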
Article
Full-text available
Recently, deep learning approaches, especially convolutional neural networks (CNNs), have attracted extensive attention in iris recognition. Though CNN-based approaches realize automatic feature extraction and achieve outstanding performance, they usually require more training samples and higher computational complexity than the classic methods. This work focuses on training a novel condensed 2-channel (2-ch) CNN with few training samples for efficient and accurate iris identification and verification. A multi-branch CNN with three well-designed online augmentation schemes and radial attention layers is first proposed as a high-performance basic iris classifier. Then, both branch pruning and channel pruning are achieved by analyzing the weight distribution of the model. Finally, fast finetuning is optionally applied, which can significantly improve the performance of the pruned CNN while alleviating the computational burden. In addition, we further investigate the encoding ability of 2-ch CNN and propose an efficient iris recognition scheme suitable for large database application scenarios. Moreover, the gradient-based analysis results indicate that the proposed algorithm is robust to various image contaminations. We comprehensively evaluated our algorithm on three publicly available iris databases for which the results proved satisfactory for real-time iris recognition.
... A human-in-the-loop iris recognition system was developed based on detecting and matching iris crypts; the quality of the image can be easily evaluated [4]. Furthermore, in 2017 Shen developed a human-in-the-loop iris biometric identification method, which performs iris recognition by detecting and matching iris crypts in iris images [5]. ...
... Then the iris is resized from the original image (Figure 2). A pre-processing step is performed to minimise noise [4]. The image is then converted to greyscale, i.e., the colour image is converted into a grey image (Figure 3). ...
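The greyscale-conversion step described in the excerpt can be sketched as follows. The ITU-R BT.601 luminance weights are a standard choice, and the function is an illustration rather than the cited pipeline.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 colour image to greyscale using the standard
    ITU-R BT.601 luminance weights, a common pre-processing step before
    iris feature extraction."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B contributions
    return np.asarray(rgb, dtype=float) @ weights
```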
Article
Full-text available
Abstract - Iris recognition is one of the latest biometric authentication processes among biometric technologies such as fingerprint, voice, and handprint recognition. In this technology, authentication is based on the patterns of the eye and iris. In this paper, we discuss iris recognition techniques in brief. Various methods have been introduced over time that can enhance the performance of iris image capture, leading to better recognition. This paper includes the dedicated works of various authors who made it possible to use this technology in a more sophisticated way. Iris recognition works by simply capturing an image of the iris with a sensor kept a few metres away from the face. The captured image is then matched against stored templates, thus confirming a person's identity. As the demand for security increases day by day, iris recognition is widely used for security purposes in banks, offices, document protection, home automation, and elsewhere. Its accuracy is high compared with other biometric authentication methods. Keywords: Iris recognition, Biometric authentication, Algorithms
... The results of the experiments showed that DeepIrisNet models enhance cross-sensor identification accuracy significantly. Chen et al. [28] proposed a new method for automatically detecting and matching iris crypts; their detection approach captures iris crypts of varied sizes. ...
Article
Full-text available
Biometric systems have gotten much press recently because of their use in various fields. Among all biometric approaches, automated personal identification and authentication systems based on iris recognition are regarded as the most trustworthy. Iris recognition is employed in various access control and border security applications because of its high accuracy and uniqueness. Though iris patterns are unique, extrinsic factors such as lighting, camera-eye angle, and sensor interoperability can influence them. Thus, iris recognition requires sufficient iris texture visibility to carry out trustworthy matching. When a textured contact lens covers the iris, a presentation attack or misleading non-match is recorded. However, there are situations when one desires to increase the likelihood of a match even while the iris texture is partially or entirely hidden, such as when a disobedient subject deliberately wears textured contact lenses to mask their identity. In this way, contact lenses can complicate iris biometrics by obscuring iris patterns and altering inter- and intra-class distributions. This study aims to observe the influence of contact lenses on current iris recognition systems and highlight their performance. Furthermore, it examines advanced iris recognition systems that employ machine/deep-learning-based methodologies and their challenges. All the data is gathered from various online resources, including journals, conference proceedings, and survey articles, and depicts the current state of iris recognition systems.
... However, there is a lack of user-friendly techniques for iris comparison, and iris recognition has not been widely used in forensic applications. Human-in-the-loop systems based on detection and matching of iris crypts have been developed [7]; they can capture crypts of different sizes and handle topological changes in detection. Currently, iris recognition is being used in Aadhar card projects. ...
Article
Fingerprints and iris scans are two types of biometric identifiers that serve the purpose of identification and authentication. AADHAR is a project initiated by the Government of India that aims to provide a unique identification number to each resident in the country. Due to the complexity associated with storing these biometric identifiers, a compression technique is employed to minimize storage requirements. The study involves an analysis aimed at identifying the storage demands for biometric data in the AADHAR project. Additionally, a proposed approach that utilizes the K-SVD algorithm, based on sparse representation, is presented to considerably decrease the storage space necessary for storing fingerprints and iris scans.
... Miyazawa [188] proposed an approach based on Discrete Fourier Transforms (DFT). More recently, an iris recognition approach was presented by Chen et al. [46], in which the authors built a new set of iris features based on human-interpretable crypt features. ...
Thesis
The growing need for reliable and accurate recognition solutions, along with recent innovations in deep learning methodologies, has reshaped the research landscape of biometric recognition. Developing efficient biometric solutions is essential to minimize the required computational costs, especially when deployed on embedded and low-end devices. This drives the main contributions of this work, which aims at enabling a wide application range for biometric technologies. Towards enabling wider implementation of face recognition in use cases that are extremely limited by computational complexity constraints, this thesis presents a set of efficient models for accurate face verification, namely MixFaceNets. With a focus on automated network architecture design, this thesis is the first to utilize neural architecture search to successfully develop a family of lightweight face-specific architectures, namely PocketNets. Additionally, this thesis proposes a novel training paradigm based on knowledge distillation (KD), the multi-step KD, to enhance the verification performance of compact models. Towards enhancing face recognition accuracy, this thesis presents a novel margin-penalty softmax loss, ElasticFace, that relaxes the restriction of having a single fixed penalty margin. Faces occluded by masks during the recent COVID-19 pandemic present an emerging challenge for face recognition. This thesis presents a solution that mitigates the effects of wearing a mask and improves masked face recognition performance. This solution operates on top of existing face recognition models and thus avoids the high cost of retraining existing face recognition models or deploying a separate solution for masked face recognition. Aiming at introducing biometric recognition to novel embedded domains, this thesis is the first to propose leveraging the existing hardware of head-mounted displays for identity verification of the users of virtual and augmented reality applications.
This is additionally supported by proposing a compact ocular segmentation solution as a part of an iris and periocular recognition pipeline. Furthermore, an identity-preserving synthetic ocular image generation approach is designed to mitigate potential privacy concerns related to the accessibility to real biometric data and facilitate the further development of biometric recognition in new domains.
... anticipate new data in a (possibly fully autonomous) way. In supervised learning, a computer is trained using (positive and/or negative) sample inputs and exact outputs provided by a human supervisor [8][9][10][11][12][13]. The goal is to come up with a sufficiently general rule that can map new inputs to outputs. ...
Article
Individual identity authentication is currently one of the most important components in a wide range of applications. As a consequence, iris-based biometric technology has piqued the curiosity of many in this field. This high level of interest originates from its high accuracy and potentially promising applications. This article surveys the various studies that have been done on the subject, as well as the various approaches that make up a biometric system based on iris recognition. Methodologies for segmentation, feature extraction, and matching are given special attention. For matching, we employ two different classification methods in our research: ANFIS and k-nearest neighbour. The iris is located using morphological methods.
... However, the recognition rate is not promising enough for a real-time system. Jianxu Chen et al. [8] detect a human-interpretable feature, the crypt, using morphological operations in a hierarchical manner. A multi-dimensional matching method is then adopted to overcome errors due to detection failures. ...
Article
Full-text available
The concern over security in all fields has intensified over the years. The prefatory phase of providing security begins with authentication to provide access. In many scenarios, this authentication is provided by biometric systems. Moreover, the threat of pandemics has made people consider hygienic systems that are noninvasive. Iris image recognition is one such noninvasive biometric system that can provide automated authentication. The self-organizing map is an artificial neural network that helps in iris image recognition. This network has the ability to learn the input features and perform classification. However, from the literature it is observed that the performance of this classifier has scope for refinement to yield better classification. In this paper, heterogeneous methods are adopted to improve the performance of the classifier for iris image recognition. The heterogeneous methods involve the application of Gravity Search Optimization, Teacher Learning Based Optimization, Whale Optimization, and Gray Wolf Optimization in the training process of the self-organizing map classifier. The method was tested on iris images from the IIT-Delhi database. The results of the experiment show that the proposed method performs better.
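The self-organizing map mentioned in the abstract learns by pulling its units toward each input. One training step can be sketched as below, assuming a 1-D unit grid and a Gaussian neighbourhood; the parameters are illustrative, not those of the paper.

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One training step of a 1-D self-organizing map: find the best-matching
    unit (BMU), then move every unit toward the input x, weighted by a
    Gaussian neighbourhood centred on the BMU."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
    return weights + lr * h[:, None] * (x - weights)
```

Metaheuristics such as Whale or Gray Wolf Optimization can then tune hyperparameters like lr and sigma or the initial weights, which is the kind of refinement the paper explores.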
Article
In this paper, we propose an eye-gaze tracking method based on head orientation estimation that uses a single 60 GHz frequency-modulated continuous-wave (FMCW) radar sensor. The FMCW radar data are acquired for cases, in which the radar is illuminating the front or side of the face. Because the variation in facial muscles caused by eye blinking is more pronounced when a human gazes at the radar, the orientation of the human head can be estimated by analyzing the received radar signal. First, an approximate range-angle map is generated to identify whether a human exists. When a human is detected, the received signal is projected onto the lower subspace of interest. Subsequently, a super-resolution algorithm is applied to the projected signal to obtain a precise target spectrum. The accumulated spectrogram is used as input to MobileNet to classify radar signal images corresponding to different orientations of the human head. The classification results show that the proposed method can identify the orientation of a human head with an accuracy exceeding 90%.
Article
Iris segmentation is a crucial step in iris recognition systems. Iris segmentation in visible wavelength and unconstrained environments is more challenging than segmenting iris images in ideal environments. This paper proposes a new iris segmentation method that exploits the color of human eyes to segment the iris region more accurately. While most current iris segmentation methods ignore the color of the iris or deal with grayscale eye images directly, the proposed method benefits from iris color to simplify the iris segmentation step. In the first step, we estimate the expected iris center using Haar-like features. The iris color is detected and, accordingly, a color-convenient segmentation algorithm is applied to find the iris region. Dealing separately with each iris color set significantly decreases false segmentation errors and enhances the performance of the iris recognition system. The results of testing the proposed algorithm on the UBIRIS database demonstrate its robustness against different noise factors and non-ideal conditions.
Article
Recognition of individuals in cross-spectral environments involves situations where probe and gallery images are captured in distinct wavelength ranges and hence differ significantly in terms of appearance and illumination. In case of heterogeneous periocular images, alleviating this wide appearance gap and learning to extract illumination-invariant features from such local regions become cumbersome, giving rise to a challenging research problem. In this work, we design a novel holistic feature reconstruction-based attention module (H-FRAM) to refine and generate discriminative convolutional features. In contrast to existing spatial and channel attention mechanisms that compute 2-D or 1-D attention weights respectively, H-FRAM calculates the importance of each feature location by performing multi-linear principal component analysis on full 3-D tensor space. In H-FRAM, we claim that the feature reconstruction error, computed in a holistic manner, plays a crucial role in determining the relevance of feature locations. This reconstruction error is found by projecting the input feature map onto a multi-dimensional eigenspace that captures most of the important variations. To our knowledge, this is the first work which explores subspace learning approach in the context of 3-D attention mechanism to capture discriminative feature information. We have created an in-house cross-spectral periocular dataset containing visible and near-infrared images from 200 classes. The images are captured in unconstrained acquisition setup involving unsupervised eye and head movements as well as accessory variations (face masks and eyeglasses). Extensive experiments and ablation studies show that the proposed network achieves state-of-the-art recognition performances on the existing and in-house datasets for heterogeneous and homogeneous periocular recognition as well as heterogeneous face recognition.
Article
In this paper, a novel framework that fuses the posture data taken by a drone (or unmanned aerial vehicle, UAV) camera and the wearable sensors data recorded by smartwatches is proposed. The framework is designed for continuously tracking persons in a drone view by analyzing location-independent human posture features and correctly tagging smartwatch identities (IDs) and personal profiles to video human objects, thus conquering the former work in requiring ground markers. Person detection, ID assignment, and pose estimation are integrated into our framework to obtain recognized human postures. These recognized postures are then paired with those from the wearable sensors. Through fusing common postures such as standing, walking, jumping, and falling down, person tracking accuracy by UAV up to 95.36% can be attained in our testing scenarios.
Article
For over three decades, the Gabor-based IrisCode approach has been acknowledged as the gold standard for iris recognition, mainly due to the high entropy and binary nature of its signatures. This method is highly effective in large-scale environments (e.g., national ID applications), where millions of comparisons per second are required. However, it is known that non-linear deformations in the iris texture, with fibers vanishing/appearing in response to pupil dilation/contraction, often flip the signature coefficients, and are the main cause of the increase in false rejections. This paper addresses this problem, describing a customised Deep Learning (DL) framework that: 1) virtually emulates the IrisCode feature encoding phase; while also 2) detecting the deformations in the iris texture that may lead to bit flipping, and autonomously adapting the filter configurations for such cases. The proposed DL architecture seamlessly integrates the Gabor kernels that extract the IrisCode and a multi-scale texture analyzer, from which the biometric signatures are obtained. In this sense, it can be seen as an adaptive encoder that is fully compatible with the IrisCode approach, while increasing the permanence of the signatures. The experiments were conducted on two well-known datasets (CASIA-Iris-Lamp and CASIA-Iris-Thousand) and showed a notable decrease in the mean/standard deviation values of the genuine distribution, at the expense of only a marginal deterioration in the impostor scores. The resulting decision environments consistently reduce the levels of false rejections with respect to the baseline for most operating levels (e.g., over 50% at 1e-3 FAR values). The source code of the DeepGabor encoder is available at: https://github.com/hugomcp/DeepGabor.
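The Gabor kernels on which the IrisCode pipeline (and the DeepGabor encoder above) builds are a Gaussian envelope modulating a sinusoidal carrier. A minimal real-part kernel can be generated as follows; the isotropic envelope and parameter names are simplifying assumptions, not the exact filters used in either system.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real part of a 2-D Gabor filter: a Gaussian envelope of width sigma
    modulating a cosine carrier of the given wavelength, rotated by theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

In IrisCode, the signs of the quadrature filter responses are quantized into the binary signature bits; the bit flips discussed above occur when texture deformation changes a response's sign.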
Article
Full-text available
Iris recognition is a secure and well-suited biometric application in the digital world because of its unique characteristics. The digital world plays an increasingly significant role in human life across various applications. These applications span secure national systems such as border control, criminal investigations, postmortem studies, access to digital equipment, smart homes, smart appliances, smart cars, etc. Due to the digitalization of the world, research communities, scientists, and industries are focusing on biometric-based secure iris recognition systems. Several researchers have done much work in this domain, but there is still scope for improvement for various reasons, such as the limited speed and accuracy of the modules. Researchers have implemented various algorithms based on traditional and neural network architectures. In this scenario, this paper gives a brief overview of the different techniques and algorithms used by researchers to predict the age of humans using the iris. It discusses one hundred and one papers from the literature covering various approaches to iris image segmentation, feature extraction, and classification. The paper summarizes publicly available standard databases and various evaluation parameters, i.e., accuracy, precision, recall, f-score, etc. The research community has evaluated age prediction through iris-based state-of-the-art algorithms with secure prediction metrics, i.e., TPR, TNR, FPR, FNR. Finally, this paper provides the strengths and weaknesses of the various state-of-the-art algorithms and summarizes the gaps in the existing technology along with the scope for improvement.
Article
This work proposes a spherical-orthogonal-symmetric Haar wavelet to decompose and reconstruct spherical iris signals, yielding stronger geometric features of the iris surface. Its feature extraction ability is compared with those of spherical harmonics and of semi-orthogonal and nearly orthogonal spherical Haar wavelets. The developed spherical-orthogonal-symmetric Haar wavelet, combined with a convolutional neural network, is also proposed for drivers' iris recognition. It can effectively capture the local fine features of the spherical iris surface and has a stronger iris recognition ability than semi-orthogonal or nearly orthogonal spherical Haar wavelet bases.
Article
Deep learning (DL) based semantic segmentation methods have achieved excellent performance in biomedical image segmentation, producing high quality probability maps to allow extraction of rich instance information to facilitate good instance segmentation. While numerous efforts were put into developing new DL semantic segmentation models, less attention was paid to a key issue of how to effectively explore their probability maps to attain the best possible instance segmentation. We observe that probability maps by DL semantic segmentation models can be used to generate many possible instance candidates, and accurate instance segmentation can be achieved by selecting from them a set of "optimized" candidates as output instances. Further, the generated instance candidates form a well-behaved hierarchical structure (a forest), which allows selecting instances in an optimized manner. Hence, we propose a novel framework, called hierarchical earth mover's distance (H-EMD), for instance segmentation in biomedical 2D+time videos and 3D images, which judiciously incorporates consistent instance selection with semantic-segmentation-generated probability maps. H-EMD contains two main stages. (1) Instance candidate generation: capturing instance-structured information in probability maps by generating many instance candidates in a forest structure. (2) Instance candidate selection: selecting instances from the candidate set for final instance segmentation. We formulate a key instance selection problem on the instance candidate forest as an optimization problem based on the earth mover's distance (EMD), and solve it by integer linear programming. Extensive experiments on eight biomedical video or 3D datasets demonstrate that H-EMD consistently boosts DL semantic segmentation models and is highly competitive with state-of-the-art methods.
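At the heart of H-EMD is the earth mover's distance. In one dimension with unit ground distance it has a closed form — the summed absolute difference of the cumulative histograms — sketched below. The full method optimizes instance-candidate selection with integer linear programming, which this illustration does not attempt.

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal total
    mass: accumulate the running surplus of p over q and sum the absolute
    amount of mass that must cross each bin boundary."""
    assert abs(sum(p) - sum(q)) < 1e-9, "histograms must have equal mass"
    surplus, cost = 0.0, 0.0
    for pi, qi in zip(p, q):
        surplus += pi - qi   # mass carried past this bin boundary
        cost += abs(surplus)
    return cost
```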
Article
We revisit the problem of iris tracking with RGB cameras, aiming to obtain iris contours from captured images of eyes. We find the reason that limits the performance of the state-of-the-art method in more general non-cooperative environments, which prohibits a wider adoption of this useful technique in practice. We believe that because the iris boundary could be inherently unclear and blocked, as its pixels occupy only an extremely limited percentage of those on the entire image of the eye, similar to the stars hidden in fireworks, we should not treat the boundary pixels as one class to conduct end-to-end recognition directly. Thus, we propose to learn features from iris and sclera regions first, and then leverage entropy to sketch the thin and sharp iris boundary pixels, where we can trace more precise parameterized iris contours. In this work, we also collect a new dataset by smartphone with 22 K images of eyes from video clips. We annotate a subset of 2 K images, so that label propagation can be applied to further enhance the system performance. Extensive experiments over both public and our own datasets show that our method outperforms the state-of-the-art method. The results also indicate that our method can improve the coarsely labeled data to enhance the iris contour’s accuracy and support the downstream application better than the prior method.
Article
Full-text available
In this paper, the edge corners (ECs) are proposed as new visible feature points located at the edges of visible iris features such as crypts, pigment spots and stripes. A new technique is developed to segment the iris using the ECs. In addition, an efficient artificial intelligence based fuzzy logic system for the iris recognition stage is used to mitigate the randomness of the iris’s visible features due to pupil dilations and elastic distortions. Iris recognition is achieved by comparing the distribution pattern of the ECs using the proposed fuzzy logic system with four linguistic variables. The first goal is to achieve a high recognition rate with very low computational time. The second goal is to facilitate the use of iris recognition in forensics by using only ECs of the visible features of the iris rather than using full images of those features. Therefore, the proposed fuzzy logic based iris segmentation and recognition (FLISR) system can be used for automatic evaluation and manual verification. In the automatic evaluation, the system finds the best gallery iris image(s) matching the input probe image. Manual verification is done when trained examiners perform independent inspections to determine the best matching iris image. Extensive experiments with different data sets demonstrate the efficiency of the proposed FLISR. In terms of iris segmentation, the iris localization has reached an average accuracy of 99.85%. In addition, the average matching accuracy of the iris recognition has achieved 99.83% with very low computational time as compared to similar algorithms available in the literature.
Article
Full-text available
The demand on devices and systems empowered by biometric verification and identification mechanisms has been increasing in recent years as they have become a significant part of our lives. Palm vein biometric is an emerging technology that has drawn considerable attention from researchers and scientists over the last decade. In the present work, a novel feature extraction methodology named GPWLD combing the Gabor features with positional Weber’s local descriptor (PWLD) features is proposed. WLD is a highly representative micro-pattern descriptor that performs well against noise and illumination changes in images. However, it lacks the ability to capture the vein pattern features at different orientations. Moreover, its descriptor packs the local information content into a single histogram that does not take the spatial locality of micro-patterns into consideration. To solve these two issues, firstly, the palm vein image is passed through Gabor filter with different orientations to capture the salient rotational features found in the output feature maps. Secondly, the spatiality is achieved by uniformly dividing each feature map into several blocks. Next, Weber’s law feature descriptor (WLD) histogram is computed for each block in every feature map. Finally, these histograms are concatenated to compose the final feature vector. Due to the high dimensionality of the final feature vector, Principal component analysis (PCA) algorithm is utilized for feature size reduction. In the classification stage, a deep neural network (DNN) comprising an optimized stacked autoencoder (SAE) with Bayesian optimization and a softmax layer is used. Optimization of the SAE is carried out by using Bayesian optimization to find the optimal SAE structure and the options of the training algorithm. The Experimental results verify the discriminative power of the extracted features and the accuracy of the proposed DNN. 
In both identification and verification experiments, the proposed method attains a higher identification rate and a lower EER than state-of-the-art methods.
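The spatial-blocking step described above (divide each feature map into blocks, histogram each block, concatenate) can be sketched in a few lines. This is a simplification, not the paper's method: plain intensity histograms stand in for the per-block WLD histograms, and the image and parameters are hypothetical.

```python
def block_histograms(image, n_blocks=2, n_bins=4, max_val=256):
    """Divide a 2-D image into n_blocks x n_blocks tiles and concatenate
    one intensity histogram per tile (a simplified stand-in for the
    per-block WLD histograms described in the abstract)."""
    h, w = len(image), len(image[0])
    bh, bw = h // n_blocks, w // n_blocks
    feature = []
    for by in range(n_blocks):
        for bx in range(n_blocks):
            hist = [0] * n_bins
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    hist[image[y][x] * n_bins // max_val] += 1
            feature.extend(hist)          # concatenation preserves locality
    return feature

# Hypothetical 4x4 "feature map" with distinct content per quadrant.
img = [[0, 64, 128, 192],
       [0, 64, 128, 192],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
vec = block_histograms(img)
print(len(vec))  # 16 = 4 blocks x 4 bins
```

Because each block contributes its own histogram, identical micro-patterns in different image regions produce different positions in the final vector, which is exactly the spatial-locality property the abstract motivates.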
Article
Full-text available
In this manuscript, we present score recognition techniques for the blowgun game based on computer vision. First, the algorithm detects the position of the target and calibrates the camera’s parameters. Then the score is calculated by detecting the dart tip on the target for real-time applications. To improve robustness, an initial calibration is proposed that records N points as references at the edges of circles, to correct the camera position and angle. This approach can overcome the problems of lighting changes and camera viewing-angle deviation. A fast segmentation and orientation decision is proposed to find the blowgun tip accurately. Two cameras are used to solve the overlapping problem of multiple darts and improve detection accuracy. Based on the calibration parameters, a distance weighting method is proposed to calculate the score precisely. Experiments show that score recognition accuracy reaches about 97%, and the processing speed meets the real-time requirement with a software implementation.
Chapter
Full-text available
Iris recognition has been among the most secure and reliable biometric traits, because of its permanent and unique features. Among the various essential modules of an iris recognition framework, feature extraction has been the most sought-after module, where numerous research works have been carried out to yield an effective representation of iris features. This paper proposes an improved version of a well-known feature descriptor, the Xor-sum code, to obtain enhanced recognition accuracy. The proposed approach incorporates curvature information into the conventional Gabor filter to facilitate discriminative iris feature representation. Rigorous experimentation with two challenging benchmark iris datasets has been performed to validate the viability of the suggested strategy. The proposed approach is also generalized to work with both near-infrared and visible-wavelength images.
Article
Full-text available
Drones have been applied to a wide range of security and surveillance applications recently. With drones, the Internet of Things is extending into 3D space. An interesting question is: can we conduct person identification (PID) in a drone view? Traditional PID technologies such as RFID and fingerprint/iris/face recognition have their limitations or require close contact with specific devices. Hence, these traditional technologies cannot be easily deployed to drones due to the dynamic change of view angle and height. In this work, we demonstrate how to retrieve IoT data from users’ wearables and correctly tag them on the human objects captured by a drone camera to identify and track ground human objects. First, we retrieve human objects from videos and conduct coordinate transformation to handle the change of drone positions. Second, a fusion algorithm is applied to measure the correlation of video data and inertial data based on the extracted human motion features. Finally, we can couple human objects with their wearable IoT devices, achieving our goal of tagging wearable device data (such as personal profiles) on human objects in a drone view. Our experimental evaluation shows a recognition rate of 99.5% for varying walking paths, and 98.6% when the drone’s camera angle is within 37°. To the best of our knowledge, this is the first work integrating videos from drone cameras and IoT data from inertial sensors.
Article
Multi-sensor data fusion combines various information sources to produce a more accurate or complete description of the environment. This paper studies an object identification (OID) system using multiple distributed cameras and Internet of Things (IoT) devices for better visualizability and reconfigurability. We first propose a data processing and fusing method to merge the detection results of different IoT devices and video cameras, in order to locate, identify, and track target objects in the monitored area. Then, we develop the FusionTalk system by integrating the data fusion techniques with IoTtalk, an IoT device management platform. FusionTalk is designed with flexibility, modularity, and expansibility, where cameras, IoT devices, and network applications are modularized and can be conveniently plugged in/out, reconfigured, and reused through graphical user interfaces. In FusionTalk, the scope and the target of surveillance can be flexibly configured and associated, and administrators can be warned and easily visualize the movement and behavior of specific objects. Our experimental evaluation of the data fusion algorithm in various scenarios shows an identification accuracy above 95%. Finally, theoretical and numerical analyses on the failure probability of pairing IoT devices with video objects by FusionTalk are presented. Extensive experiments are performed to demonstrate the pairing effectiveness in real-world scenarios with failure probability less than 0.01%.
Article
Full-text available
The eye region is one of the most attractive sources for identification and verification due to the representative availability of such biometric modalities as periocular and iris. Many score-level fusion approaches have been proposed to combine these two modalities targeting to improve the robustness. The score-level approaches can be grouped into three categories: transformation-based, classification-based and density-based. Each category has its own benefits, if combined can lead to a robust fusion mechanism. In this paper, we propose a hierarchical fusion network to fuse multiple fusion approaches from transformation-based and classification-based categories into a unified framework for classification. The proposed hierarchical approach relies on the universal approximation theorem for neural networks to approximate each fusion approach as one child neural network and then ensemble them into a unified parent network. This mechanism takes advantage of both categories to improve the fusion performance, illustrated by an improved equal error rate of the multimodal biometric system. We subsequently force the parent network to learn the representation and interaction strategy between the child networks from the training data through a sparse autoencoder layer, leading to further improvements. Experiments on two public datasets (MBGC version 2 and CASIA-Iris-Thousand) and our own dataset validate the effectiveness of the proposed hierarchical fusion approach for periocular and iris modalities.
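A toy illustration of the transformation-based ingredient in such score-level fusion (not the paper's hierarchical network; the similarity scores below are hypothetical): min-max normalization maps each matcher's scores to a common range, after which a simple sum rule combines the modalities.

```python
def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] (a transformation-based step);
    assumes the scores are not all identical."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(score_lists):
    """Fuse per-modality score lists: normalize each modality, then
    average the normalized scores per gallery candidate."""
    normalized = [min_max_normalize(s) for s in score_lists]
    return [sum(col) / len(col) for col in zip(*normalized)]

# Hypothetical similarity scores for three gallery candidates.
periocular = [0.2, 0.9, 0.4]
iris       = [10.0, 80.0, 30.0]
fused = sum_rule_fusion([periocular, iris])
print(fused)  # candidate 2 gets the highest fused score
```

Classification-based fusion would instead feed the per-modality scores into a trained classifier; the paper's contribution is to approximate several such rules as child networks and ensemble them in a parent network.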
Article
Localized cooling with ice or snow has long been a common treatment for sports injuries, and athletes routinely use such therapeutic hypothermia to return to athletic activity. This paper presents a feature extraction cloud computing platform to support this application of localized cooling. Two review authors independently assessed the relevance and impact calculations involved in the trial, and the cloud server allows features to be extracted directly from the encrypted image. Awareness of disability sport, in particular para snowboard, may not be common or easily available, so it is important that these potential athletes know that the sport exists and where they can try it for the first time. Analyses and test results show that, for privacy-preserving feature extraction from images, the proposed technique is effective and secure, and may be applied to secure cloud image computing applications.
Article
Augmented and virtual reality deployment is finding increasing use in novel applications. Some of these emerging and foreseen applications allow users to access sensitive information and functionalities. Head Mounted Displays (HMD) are used to enable such applications, and they typically include eye-facing cameras to facilitate advanced user interaction. Such integrated cameras capture the iris and partial periocular region during the interaction. This work investigates the possibility of using the ocular images captured by the integrated cameras of HMD devices for biometric verification, taking into account the expected limited computational power of such devices. Such an approach allows the user to be verified in a manner that does not require any special and explicit user action. In addition to our comprehensive analyses, we present a lightweight, yet accurate, segmentation solution for the ocular region captured from HMD devices. Further, we benchmark a number of well-established iris and periocular verification methods, along with an in-depth analysis of iris sample selection and its effect on iris recognition performance for HMD devices. To this end, we also propose and validate an identity-preserving synthetic ocular image generation mechanism that can be used for large-scale data generation for training or attack generation purposes. We establish the realistic image quality and identity-preserving capability of the generated images by benchmarking them for iris and periocular verification.
Article
Full-text available
Iris recognition is gaining popularity in various online and offline authentication and multi-modal biometric systems. The non-altering and non-obscuring nature of the iris has increased its reliability in authentication systems. Iris images captured in uncontrolled environments and situations pose a challenging issue for iris recognition. In this paper, a compression-robust, KPCA-Gabor fused model is presented to recognize iris images accurately under these complexities. Illumination and noise robustness is included in the pre-processing stage to gain robustness and reliability against complex capturing. Effective compression features are generated as a phase pre-treatment vector using the logarithmic quantization method. Kernel Principal Component Analysis (KPCA) and Gabor filters are applied to the rectified image to generate textural features. Compression is also applied to the Gabor- and KPCA-filtered images. Fuzzy adaptive content-level fusion is applied to the compressed image and the KPCA-compression and Gabor-compression iris images. K-Nearest Neighbors (KNN) based mapping is applied to this composite, fused and reduced feature set to recognize the individual. The proposed compression- and fusion-feature-based model is applied to the CASIA-Iris, UBIRIS, and IITD datasets. Comparative evaluations against earlier approaches show that the proposed model improves recognition accuracy and also achieves a reduction in error rate.
Article
Full-text available
Post-mortem biometrics entails utilizing the biometric data of a deceased individual for determining or verifying human identity. Due to fundamental biological changes that occur in a person’s biometric traits after death, post-mortem data can be significantly different from ante-mortem data, introducing new challenges for biometric sensors, feature extractors and matchers. This paper surveys research to date on the problem of using iris images acquired after death for automated human recognition. A comprehensive review of existing literature is complemented by a summary of the most recent results and observations offered in these publications. This survey is unique in several elements. Firstly, it is the first publication to consider iris recognition where gallery images are acquired before death (peri-mortem images) and the probe images are acquired after death from the same subjects. Secondly, results are presented from the largest database of peri-mortem and post-mortem iris images, collected from 213 subjects by two independent institutions located in the U.S. and Poland. Thirdly, post-mortem recognition viability is assessed using more than 20 iris recognition algorithms, ranging from the classic (e.g., Gabor filtering-based) to the modern (e.g., deep learning-based). Finally, we provide a medically informed commentary on post-mortem iris, analyze the reasons for recognition failures, and identify key directions for future research.
Article
In this work, we demonstrate the feasibility of employing the biometric photoplethysmography (PPG) signal for human verification applications. The PPG signal offers advantages in accessibility and portability, which make its use in applications such as user access control very appealing. Therefore, we developed robust, time-stable features using signal analysis and deep learning models to increase the robustness and performance of a verification system based on the PPG signal. The proposed system utilizes different stretching mechanisms, namely Dynamic Time Warping, zero padding, and interpolation with the Fourier transform, and fuses them at the data level before deployment with different deep learning models. The designed deep models consist of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), which are used to build a user-specific model for the verification task. We collected a dataset of 100 participants, recorded in two sessions using a Plux pulse sensor. This dataset, along with two other public databases, is used to evaluate the performance of the proposed verification system in terms of uniqueness and time stability. The final results demonstrate the superiority of our proposed system on the collected dataset compared with the two public databases. The best accuracy achieved on our collected two-session database is 98% for the single-session and 87.1% for the two-session scenario.
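Of the stretching mechanisms named above, Dynamic Time Warping is the only nontrivial one; the classic dynamic-programming form (a textbook sketch, not the paper's implementation) aligns two signals of different lengths:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D signals: the
    minimal summed point-wise cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A repeated sample is absorbed by the warping, so the distance is 0.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This tolerance to local time shifts is what makes DTW attractive for aligning heartbeats of varying duration before fusing them with the fixed-length representations (zero padding, Fourier interpolation) at the data level.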
Article
With the evolution of consumer electronics, personal private information stored in the Consumer Electronics (CE) devices is becoming increasingly valuable. To protect private information, biometrics technology is subsequently equipped with the CE devices. Finger-vein is one of the biometric features, which is gaining popularity for identification recently. This article proposes a twofold finger-vein authentication system. The first stage of the identification process uses skeleton topologies to determine the similarities and differences between finger-vein patterns. Some extreme cases with ambiguous features cannot be successfully classified. Hence, the image quality assessment (IQA) is employed as the second stage of the system. Furthermore, to overcome the computational requirement of the algorithm, the GPU is adopted in our system.
Article
Despite the high efficiency of iris recognition systems, accuracy degrades as the size of the database increases; hence there is a need for indexing. One important characteristic of such an indexing approach is high tolerance against feature deviation due to noise. In this work, an indexing approach is proposed that deals with feature deviation due to variation in the quality of iris images. Further, an efficient index space has been developed using two sets of hash functions. During identification, a set of probable bin locations for the queried data is determined depending on the level of noise in the query image. Such a mechanism helps in carrying out efficient searching of the queried data. Next, the list of candidate data that are very similar to the query data is retrieved from the probable bin locations. Considering retrieval of the iris template up to the fifth rank, the proposed approach gives a high hit rate even at a low penetration rate when tested with the IITD, CASIA-V3 Interval, and UBIRIS 2 iris databases, compared to existing approaches.
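The probe-multiple-bins idea can be sketched with a toy quantization index (an illustration under assumed parameters, not the paper's hash functions): a noisy query probes every bin within a radius that grows with the estimated noise level.

```python
from itertools import product

def bin_key(feature, step=10):
    """Quantize a feature vector into an index-space bin key."""
    return tuple(v // step for v in feature)

def probe_bins(feature, radius, step=10):
    """Enumerate all bin keys within `radius` quantization steps of the
    query's own bin; a noisier query would use a larger radius."""
    base = bin_key(feature, step)
    offsets = product(range(-radius, radius + 1), repeat=len(base))
    return [tuple(b + o for b, o in zip(base, off)) for off in offsets]

# Build a tiny index over two enrolled identities (hypothetical features).
index = {}
for ident, feat in [("A", (12, 47)), ("B", (85, 13))]:
    index.setdefault(bin_key(feat), []).append(ident)

query = (9, 51)  # noisy version of A's feature, shifted into another bin
candidates = [i for k in probe_bins(query, radius=1) for i in index.get(k, [])]
print(candidates)  # ['A']
```

With radius 0 the query above would miss A entirely; widening the probe set trades a higher penetration rate for noise tolerance, which is the balance the paper tunes per query.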
Article
In recent years, iris recognition has become one of the most widely used techniques for person identification. Automatic iris identification implies a comparison of a query iris image with iris entries in a large database to determine the identity of the person. In this paper, we propose a straightforward and effective algorithm for the classification of irises into several categories according to iris texture characteristics. The goal of the classification is to identify and retrieve a smaller subset of the large database and to narrow down the search space. In this way, the response time of the iris recognition system can be significantly improved. We analyzed several cases of dividing the whole database (we used the UPOL, CASIA, and UBIRIS databases) into up to eight subsets and calculated the time savings. The simulation results illustrate the potential of the proposed classification method for large-scale iris databases.
Article
Iris segmentation is a critical step for improving the accuracy of iris recognition, as well as for medical concerns. Existing methods generally use whole eye images as input for network learning, which does not consider the geometric constraint that the iris only occurs in a specific area of the eye. As a result, such methods can be easily affected by irrelevant noisy pixels outside the iris region. In order to address this problem, we propose the ATTention U-Net (ATT-UNet), which guides the model to learn more discriminative features for separating iris and non-iris pixels. The ATT-UNet first regresses a bounding box of the potential iris region and generates an attention mask. Then, the mask is used as a weighting function to merge with discriminative feature maps in the model, making the segmentation model pay more attention to the iris region. We implement our approach on UBIRIS.v2 and CASIA.IrisV4-distance, and achieve mean error rates of 0.76% and 0.38%, respectively. Experimental results show that our method achieves consistent improvement on both visible-wavelength and near-infrared iris images with challenging scenery, and surpasses other representative iris segmentation approaches.
Article
Full-text available
This study proposes an applicable driver identification method using machine learning algorithms with driving information. The driving data are collected by a 3-axis accelerometer, which records the lateral, longitudinal and vertical accelerations. In this research, a data transformation method is developed to extract interpretable statistical features from raw 3-axis sensor data, and machine learning algorithms are utilised to identify drivers. To eliminate the bias caused by sensor installation and ensure the applicability of their approach, the authors present a data calibration method which proves to be necessary for a comparative test. Four basic supervised classification algorithms are run on the data set for comparison. To improve classification performance, they propose a multiple classifier system, which combines the outputs of several classifiers. Experimental results based on real-world data show that the proposed algorithm is effective in solving the driver identification problem. Among the four basic algorithms, the random forests (RFs) algorithm has the greatest performance in accuracy, recall and precision. With the proposed multiple classifier system, greater performance can be achieved for small groups of drivers. The RFs algorithm also takes the lead in running speed. In their experiment, ten drivers are involved and over 5,500,000 driving records per driver are collected.
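The simplest way to combine the outputs of several classifiers, as a multiple classifier system does, is a plurality vote. The abstract does not specify the combination rule, so the following is a generic sketch with hypothetical driver labels:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by plurality vote;
    returns the most frequent predicted label."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-classifier driver predictions for one trip segment:
# three of four base classifiers agree on driver_3.
votes = ["driver_3", "driver_1", "driver_3", "driver_3"]
print(majority_vote(votes))  # driver_3
```

A single weak classifier (here the one voting driver_1) is outvoted, which is why an ensemble can beat its best individual member when the base classifiers make different mistakes.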
Article
Full-text available
Scale-invariant feature transform (SIFT), a general-purpose image descriptor, has been extensively used in the field of biometric recognition. Focusing on iris biometrics, numerous SIFT-based schemes have been presented in past years, offering an alternative to traditional iris recognition, which is designed to extract discriminative binary feature vectors based on an analysis of pre-processed iris textures. However, the majority of proposed SIFT-based systems fail to maintain the recognition accuracy provided by generic schemes. Moreover, traditional systems outperform SIFT-based approaches with respect to other key system factors, i.e. authentication speed and storage requirements. In this work, we propose a SIFT-based iris recognition system which circumvents the drawbacks of previous proposals. Prerequisites, derived from an analysis of the nature of iris biometric data, are utilized to construct an improved SIFT-based baseline iris recognition scheme, which operates on normalized, enhanced iris textures obtained from near-infrared iris images. Subsequently, different binarization techniques are introduced and combined to obtain binary SIFT-based feature vectors from detected keypoints and their descriptors. On the CASIAv1, CASIAv4-Interval and BioSecure iris databases, the proposed scheme maintains the performance of different traditional systems in terms of recognition accuracy as well as authentication speed. In addition, we show that SIFT-based features complement those extracted by traditional schemes, such that a multi-algorithm fusion at score level yields a significant gain in recognition accuracy.
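The motivation for binarizing descriptors is that binary vectors can be compared with fast Hamming distances, like traditional iris codes. One simple binarization strategy (an illustration with hypothetical 4-D descriptors; the paper studies and combines several strategies over real 128-D SIFT descriptors) thresholds each dimension at the descriptor's own median:

```python
def binarize(descriptor):
    """Binarize a real-valued keypoint descriptor by thresholding each
    dimension at the descriptor's own median value."""
    med = sorted(descriptor)[len(descriptor) // 2]
    return [1 if v >= med else 0 for v in descriptor]

def hamming(a, b):
    """Bit-wise distance between two binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

d1 = [0.1, 0.9, 0.8, 0.2]   # descriptor from image 1 (hypothetical)
d2 = [0.2, 0.8, 0.7, 0.1]   # similar descriptor from image 2
d3 = [0.9, 0.1, 0.2, 0.8]   # dissimilar descriptor
b1, b2, b3 = binarize(d1), binarize(d2), binarize(d3)
print(hamming(b1, b2), hamming(b1, b3))  # 0 4
```

Small perturbations of the descriptor values leave the sign pattern, and hence the Hamming distance, unchanged, which is what makes such binary codes both compact and robust for matching.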
Article
Full-text available
The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images with enough quality. Also, all publicly available iris image databases contain data corresponding to such imaging constraints and, therefore, are suitable only for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multi-session iris image database which singularly contains data captured in the visible wavelength, at a distance (between four and eight meters) and on-the-move. This database is freely available for researchers concerned with visible-wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.
Conference Paper
Full-text available
Most iris recognition systems use the global and local texture information of the iris in order to recognize individuals. In this work, we investigate the use of macro-features that are visible on the anterior surface of RGB images of the iris for matching and retrieval. These macro-features correspond to structures such as moles, freckles, nevi, melanoma, etc. and may not be present in all iris images. Given an image of a macro-feature, the goal is to determine if it can be used to successfully retrieve the associated iris from the database. To address this problem, we use features extracted by the Scale-Invariant Feature Transform (SIFT) to represent and match macro-features. Experiments using a subset of 770 distinct irides from the Miles Research Iris Database suggest the possibility of using macro-features for iris characterization and retrieval.
Article
Full-text available
Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions engender acquired noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time in respect to the size of the image, making the procedure suitable for real-time applications.
Conference Paper
Full-text available
We introduce a new distance between two distributions that we call the Earth Mover's Distance (EMD), which reflects the minimal amount of work that must be performed to transform one distribution into the other by moving “distribution mass” around. This is a special case of the transportation problem from linear optimization, for which efficient algorithms are available. The EMD also allows for partial matching. When used to compare distributions that have the same overall mass, the EMD is a true metric, and has easy-to-compute lower bounds. In this paper we focus on applications to image databases, especially color and texture. We use the EMD to exhibit the structure of color-distribution and texture spaces by means of Multi-Dimensional Scaling displays. We also propose a novel approach to the problem of navigating through a collection of color images, which leads to a new paradigm for image database search
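For 1-D distributions of equal overall mass, the EMD has a closed form: the total work equals the summed absolute difference of the running cumulative sums, since all mass flows along the line. A minimal sketch of that special case (the paper's image-retrieval setting uses general ground distances solved via the transportation problem):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass: work = sum over bins of |running CDF difference|, i.e. the
    mass that must be carried past each bin boundary."""
    assert abs(sum(p) - sum(q)) < 1e-9, "equal overall mass assumed"
    work, carried = 0.0, 0.0
    for pi, qi in zip(p, q):
        carried += pi - qi      # surplus mass carried to the next bin
        work += abs(carried)    # each carried unit travels one bin
    return work

# One unit of mass moved two bins to the right costs 2 units of work.
print(emd_1d([1, 0, 0], [0, 0, 1]))  # 2.0
```

This behavior, cost proportional to how far mass moves rather than just whether bins differ, is why EMD matches perceptual similarity for color and texture histograms better than bin-wise distances do.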
Book
Tenth edition. For over four decades, Introduction to Operations Research by Frederick Hillier has been the classic text on operations research. While building on the classic strengths of the text, the author continues to find new ways to make the text current and relevant to students. One way is by incorporating a wealth of state-of-the-art, user-friendly software and more coverage of business applications than ever before. The hallmark features of this edition include clear and comprehensive coverage of fundamentals, an extensive set of interesting problems and cases, and state-of-the-practice operations research software used in conjunction with examples from the text. 1. Introduction -- 2. Overview of the operations research modeling approach -- 3. Introduction to linear programming -- 4. Solving linear programming problems: the simplex method -- 5. The theory of the simplex method -- 6. Duality theory -- 7. Linear programming under uncertainty -- 8. Other algorithms for linear programming -- 9. The transportation and assignment problems -- 10. Network optimization models -- 11. Dynamic programming -- 12. Integer programming -- 13. Nonlinear programming -- 14. Metaheuristics -- 15. Game theory -- 16. Decision analysis -- 17. Queuing theory -- 18. Inventory theory -- 19. Markov decision processes -- 20. Simulation -- Appendix 1. Documentation for the OR courseware -- Appendix 2. Convexity -- Appendix 3. Classical optimization methods -- Appendix 4. Matrices and matrix operations -- Appendix 5. Table for a Normal Distribution -- Partial answers to selected problems.
Article
The iris is a stable biometric that has been widely used for human recognition in various applications. However, official deployment of the iris in forensics has not been reported. One of the main reasons is that the output of current iris recognition techniques is hard for examiners to inspect visually. To further promote the maturity of iris recognition in forensics, one way is to make the similarity between irises visualizable and interpretable. Recently, a human-in-the-loop iris recognition system was developed, based on detecting and matching iris crypts. Building on this framework, we propose a new approach for detecting and matching iris crypts automatically. Our detection method is able to capture iris crypts of various sizes. Our matching scheme is designed to handle potential topological changes in the detection of the same crypt in different acquisitions. Our approach outperforms the known visible-feature-based iris recognition method on two different datasets, with an over 19% higher rank-one hit rate in identification and an over 46% lower equal error rate in verification.
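The rank-one hit rate reported in such identification experiments is simply the fraction of probes whose genuine gallery entry receives the top similarity score. A small sketch with hypothetical scores (the genuine match is assumed to lie on the diagonal of the score matrix):

```python
def rank_one_hit_rate(score_matrix):
    """Fraction of probes whose genuine gallery entry gets the highest
    similarity score; score_matrix[i][j] compares probe i to gallery
    entry j, with the genuine pair assumed at j == i."""
    hits = 0
    for i, row in enumerate(score_matrix):
        if max(range(len(row)), key=row.__getitem__) == i:
            hits += 1
    return hits / len(score_matrix)

scores = [[0.9, 0.2, 0.1],   # probe 0: genuine score wins
          [0.3, 0.8, 0.4],   # probe 1: genuine score wins
          [0.6, 0.5, 0.4]]   # probe 2: an impostor outscores the genuine
print(rank_one_hit_rate(scores))  # ≈ 0.667
```

A "22% higher rank-one hit rate" in the headline abstract is a relative statement about this quantity computed over the whole probe set.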
Conference Paper
Iris recognition is one of the most reliable biometric technologies for identity recognition and verification, but it has not been used in a forensic context because the representation and matching of iris features are not straightforward for traditional iris recognition techniques. In this paper we concentrate on the iris crypt as a visible feature used to represent the characteristics of irises in a similar way to fingerprint minutiae. The matching of crypts is based on their appearances and locations. The number of matching crypt pairs found between two irises can be used for identity verification and the convenience of manual inspection makes iris crypts a potential candidate for forensic applications.
Conference Paper
This paper presents an improved framework for iris crypt detection and matching that outperforms both previous methods and manual annotations. The system uses a multi-scale pyramid architecture to detect feature candidates before they are further examined and optimized by heuristic-based methods. The dissimilarity between irises is measured by a two-stage matcher in simple-to-complex order. The first stage estimates the global dissimilarity and rejects the majority of non-matching candidates. The surviving pairs are matched by local dissimilarities between each crypt pair using shape descriptors. The proposed framework shows significant performance improvement in both identification and verification contexts.
Conference Paper
Tracking the motion of Myxococcus xanthus is a crucial step for fundamental bacteria studies. The large number of bacterial cells involved, limited image resolution, and various cell behaviors (e.g., division) make tracking a highly challenging problem. A common strategy is to segment the cells first and associate detected cells into moving trajectories. However, known detection association algorithms that run in polynomial time are either ineffective at dealing with particular cell behaviors or sensitive to segmentation errors. In this paper, we propose a polynomial-time hierarchical approach for associating segmented cells, using a new Earth Mover's Distance (EMD) based matching model. Our method is able to track cell motion when cells may divide or leave/enter the image window, and when the segmentation results may incur false alarms, lost detections, and falsely merged/split detections. We demonstrate it on tracking M. xanthus. Applied to error-prone segmented cells, our algorithm exhibits higher track purity and produces more complete trajectories compared to several state-of-the-art detection association algorithms.
Article
Ordinal measures have been demonstrated to be an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection, with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and inter-class matching samples and the weighted sparsity of ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and inter-class matching pairs are well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be efficiently obtained even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods such as mRMR, ReliefF, Boosting and Lasso for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
Conference Paper
We are not aware of any previous systematic investigation of how well human examiners perform at identity verification using the same type of images as acquired for automated iris recognition. This paper presents results of an experiment in which examiners consider a pair of iris images to decide if they are either (a) two images of the same eye of the same person, or (b) images of two different eyes, with the two different individuals having the same gender, ethnicity and approximate age. Results suggest that novice examiners can readily achieve accuracy exceeding 90% and can exceed 96% when they judge their decision as “certain”. Results also suggest that examiners may be able to improve their accuracy with experience.
Conference Paper
We conducted an experiment in which participants were asked to annotate the crypts they see in iris images. The results were used to assess the utility of crypts in identity recognition for forensic applications. Although inter-participant annotation consistency may be limited by the noticeability of crypts, and the range of genuine similarity scores obtained by the adopted matcher is correlated with the number of crypt pixels in a given image, the intra-iris crypt perception is sufficiently consistent for nearly 95% of query images to be correctly identified. The performance is expected to be better for professional examiners with comprehensive training.
Conference Paper
Iris recognition is one of the most reliable biometric technologies for identity recognition and verification. Most iris recognition methods represent the iris texture using the output of band-pass filters and compare irises by matching the filter outputs. In this paper we explore iris recognition based on two visible features, crypts and anti-crypts. The experiments demonstrate that crypts and anti-crypts are stable, consistent, and have good discriminatory ability over different eyes. The visible-feature-based iris representation and matching are easily recognizable to human eyes and have the potential to be used in the field of forensics.
Conference Paper
This paper describes the Iris Challenge Evaluation (ICE) 2005. The ICE 2005 contains a dataset of 2953 iris images from 132 subjects. The data is organized into two experiments: right and left eye. Iris recognition performance is presented for twelve algorithms from nine groups that participated in the ICE 2005. For the top performers, verification rate on the right iris is above 0.995 at a false accept rate of 0.001. For the left iris, the corresponding verification rates are between 0.990 and 0.995 at a false accept rate of 0.001. The results from the ICE 2005 challenge problem were the first to observe correlations between the right and left irises for match and non-match scores, and quality measures.
Conference Paper
We present a new approach based on morphological operators for biometric identification of individuals through segmentation and analysis of the iris. Algorithms based on morphological operators are developed to segment the iris region from the eye image and to highlight chosen iris patterns. The extracted features are used to represent and characterize the iris. In order to properly extract the desired patterns, an algorithm is proposed to produce skeletons with unique paths between end-points and nodes. The representation obtained by the morphological processing is stored for identification purposes. To illustrate the efficiency of the morphological approach, some results are presented. The proposed system was designed for low-complexity implementation and low storage requirements.
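The kind of morphological pre-processing this abstract describes can be sketched on a toy image: threshold the dark iris region, then apply a morphological opening to suppress small noise before any pattern extraction. The image, threshold, and structuring element below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy import ndimage

# toy grayscale "eye": bright background (0.9) with a darker ring-shaped iris (0.2)
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
image = np.where((r > 10) & (r < 24), 0.2, 0.9)
image[5, 5] = 0.1                      # a single-pixel speck of dark noise

dark = image < 0.5                     # threshold to a binary mask of dark pixels
# opening = erosion then dilation: the isolated speck vanishes, the ring survives
iris = ndimage.binary_opening(dark, structure=np.ones((3, 3)))
print(dark.sum() - iris.sum())         # number of pixels removed by the opening
```

Skeletonization of the retained patterns (the one-pixel-wide paths between end-points and nodes the abstract mentions) would follow as a further thinning step on `iris`.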
Article
Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statistical independence on iris phase structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm² over the iris, enabling real-time decisions about personal identity with extremely high confidence. The high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification" mode) without making false matches, despite the many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. The paper explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.
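The test of statistical independence described here amounts to thresholding a fractional, mask-aware Hamming distance between two 2048-bit phase codes: genuine pairs disagree on far fewer bits than the roughly 50% expected of independent codes. A minimal sketch with toy codes (the variable names and bit counts flipped are illustrative, not from the paper):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both images."""
    valid = mask_a & mask_b
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)   # toy 2048-bit iris code
b = a.copy()
b[:100] ^= 1                                   # flip 100 bits: a "genuine" pair
mask = np.ones(2048, dtype=np.uint8)
print(hamming_distance(a, b, mask, mask))      # 100/2048, about 0.049

unrelated = rng.integers(0, 2, 2048, dtype=np.uint8)
print(hamming_distance(a, unrelated, mask, mask))  # near 0.5 for independent codes
```

A decision threshold placed between these two regimes (genuine near 0, impostor near 0.5) is what makes exhaustive one-to-many search feasible.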
Article
Recent large-scale deployments of iris recognition for border-crossing controls enable a critical assessment of the robustness of this technology against false matches, since vast numbers of cross-comparisons become possible within large databases. This paper presents results from the 200 billion iris cross-comparisons that could be performed within a database of 632,500 different iris images, spanning 152 nationalities. Each iris pattern was encoded into a phase sequence of 2048 bits using the Daugman algorithms. Empirically analyzing the tail of the resulting distribution of similarity scores enables specification of decision thresholds, and prediction of performance, of the iris recognition algorithms if deployed in identification mode on national scales.
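The tail analysis mentioned here can be illustrated with Daugman's binomial model of impostor scores: treat an impostor comparison as roughly 249 independent fair-coin tosses (the degrees of freedom cited in the abstract above) and compute the cumulative probability below a Hamming-distance threshold. A minimal sketch — the threshold 0.32 is an illustrative choice, not a value from the paper:

```python
from math import comb

def impostor_tail(threshold, dof=249):
    """P(fractional Hamming distance <= threshold) for an impostor pair,
    modelling the comparison as `dof` fair Bernoulli trials."""
    k_max = int(threshold * dof)                 # most disagreeing bits allowed
    return sum(comb(dof, k) for k in range(k_max + 1)) / 2**dof

# the tail probability at a strict threshold is vanishingly small,
# which is what permits billions of cross-comparisons without false matches
print(f"{impostor_tail(0.32):.2e}")
```

Setting the operating threshold where this tail probability times the number of comparisons stays far below one is exactly the "specification of decision thresholds" from the empirical score distribution.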
Latent prints: A perspective on the state of the science
  • Peterson
A visually interpretable iris recognition system with crypt features
  • Shen
The state of the art in algorithms, fast identification solutions and forensic applications
  • Nobel