Article

Abstract

Ears have rich structural features that are almost invariant with increasing age and variations in facial expression. Ear recognition has therefore become an effective and appealing approach to non-contact biometric recognition. This paper gives an up-to-date review of research on ear recognition. Current 2D ear recognition approaches achieve good performance in constrained environments; however, recognition performance degrades severely under pose, lighting and occlusion variations. This paper proposes a 2D ear recognition approach based on local information fusion to deal with ear recognition under partial occlusion. Firstly, the whole 2D image is divided into sub-windows. Then, Neighborhood Preserving Embedding is used for feature extraction on each sub-window, and the most discriminative sub-windows are selected according to their recognition rates; each sub-window corresponds to a sub-classifier. Thirdly, a sub-classifier fusion approach is used for recognition with partially occluded images. Experimental results on the USTB ear dataset and the UND dataset illustrate that only a few sub-windows are needed to represent the most meaningful region of the ear, and that the multi-classifier model achieves a higher recognition rate than using the whole image for recognition.
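As an illustration of the pipeline outlined in the abstract (sub-window partitioning, per-window feature extraction, and fusion of per-window sub-classifiers), the following minimal Python sketch uses PCA as a stand-in for Neighborhood Preserving Embedding, which has no off-the-shelf scikit-learn implementation; the grid size, classifier choice and weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch: split an ear image into sub-windows, train one classifier per window,
# and fuse the per-window class scores with a weighted sum.
# PCA stands in for Neighborhood Preserving Embedding (NPE); the 4x7 grid,
# 1-NN classifier and uniform weights are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def split_into_subwindows(img, rows=4, cols=7):
    """Split a 2D ear image into rows*cols non-overlapping sub-windows."""
    h, w = img.shape
    return [img[i * h // rows:(i + 1) * h // rows,
                j * w // cols:(j + 1) * w // cols].ravel()
            for i in range(rows) for j in range(cols)]

def train_subclassifiers(train_imgs, labels, n_components=20):
    """Fit one (projection, classifier) pair per sub-window position."""
    per_position = zip(*[split_into_subwindows(im) for im in train_imgs])
    models = []
    for window_stack in per_position:          # same sub-window across all images
        X = np.vstack(window_stack)
        proj = PCA(n_components=min(n_components, *X.shape)).fit(X)
        clf = KNeighborsClassifier(n_neighbors=1).fit(proj.transform(X), labels)
        models.append((proj, clf))
    return models

def fuse_and_predict(models, probe_img, weights=None):
    """Weighted-sum fusion of the per-sub-window class scores."""
    subs = split_into_subwindows(probe_img)
    weights = np.ones(len(models)) if weights is None else np.asarray(weights)
    fused = 0.0
    for (proj, clf), sub, w in zip(models, subs, weights):
        fused = fused + w * clf.predict_proba(proj.transform(sub.reshape(1, -1)))[0]
    return models[0][1].classes_[int(np.argmax(fused))]
```

In the paper's setting, the weights would favour the most discriminative sub-windows and occluded windows could simply be dropped from the fusion; here they are left uniform for brevity.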


... According to the experimental results on the USTB and UND databases, the sub-window method can help us to identify the best ear areas in the future. The extracted results relate to data with occlusion of 33% on the sides and 50%; the rank-1 identification relates to the 24th sub-class with a 98% accuracy rate, and an 87% accuracy rate is obtained on the UND database [4]. Yuan and colleagues (2016) introduced an occlusion analyser for encoding the occluded areas of an image. Classification based on sparse representation can offer a practical way of identifying the ear under occlusion. ...
... In (3-7), (3-8) and (3-9), the two feature vectors are assessed by the Minkowski, city-block and Euclidean metrics. Parameter p is the order of the Minkowski function, which yields the Euclidean distance if p = 2. ...
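For reference, the Minkowski distance mentioned in this excerpt reduces to the city-block distance for p = 1 and to the Euclidean distance for p = 2; a minimal numeric illustration (my own, not code from the cited paper):

```python
# Minkowski distance between two feature vectors; p=1 gives the city-block
# (Manhattan) distance and p=2 the Euclidean distance.
import numpy as np

def minkowski(u, v, p=2):
    return np.sum(np.abs(np.asarray(u) - np.asarray(v)) ** p) ** (1.0 / p)

u, v = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(minkowski(u, v, p=1))  # city block: 7.0
print(minkowski(u, v, p=2))  # Euclidean: 5.0
```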
Article
Full-text available
The ear has rich structural features which rarely change with increasing age or with changes in facial expression. These features can also be used as a biometric, without any direct contact with the target. Although existing ear identification methods perform efficiently under controlled conditions, human identification accuracy decreases because of destructive factors such as occlusion. The main purpose of this article is to present a new way to overcome the limitations caused by occlusion of images. A Hidden Markov model, wavelet components and their statistical measures are used. For feature extraction, a 3-layer neural network is used to match each image to the identification tag of the corresponding person. In addition, we used the nearest-neighbour method on the feature vectors to assign images to genuine or impostor individuals, in order to be able to compare two different identification methods. The USTB database, containing 308 images (4 images per person), is used in order to reach the best accuracy rate.
... The global biometric market is growing rapidly due, on the one hand, to the enhanced recognition accuracy of biometric-based systems and, on the other, to the numerous incidents of security breaches of traditional password-based systems [9], [24]. Over the last few years, ear biometric has emerged to be as useful as face biometric for automated person identification and verification [1], [2]. Several unique advantages of ear biometric have been reported so far. ...
... For instance, ear biometric is neither sensitive to facial expression changes nor does it tend to be altered by makeup [3]. The ear possesses distinctive features which remain nearly constant throughout the lifetime of a person [2]. Unlike the face, ear biometric has been utilized to recognize identical twins because of its discriminant characteristics [4]. ...
... Recently, in a review paper [7] on ear biometrics, the authors pointed out that there is a lack of studies on real-world ear occlusions. In 2012, Yuan and Mu [2] proposed a local information fusion-based ear recognition method to obtain robustness under partial ear occlusion. However, results showed that the recognition performance of this method varied according to the location as well as the amount of occlusion. ...
... This approach was successfully applied on the XM2VTS database with a 91.5% recognition rate. Yuan and Mu (2012) presented an ear recognition system under variations of pose, lighting, and occlusion. The input is a 2D ear image. ...
... The weighted sum is utilized for the result-level fusion; the more discriminative sub-classifier gets a higher weight. Before the fusion, the match scores of each sub-classifier are transformed into a common domain using z-score normalization (Yuan & Mu, 2012). We plan to apply our proposed system by choosing three images from each ear side, making a total of six images per person, and arranging them sequentially (1R, 1L) (1R, 2L) (1R, 3L) (2R, 1L)… ...
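A minimal sketch of the z-score normalization and weighted-sum fusion described in this excerpt; the scores and weights below are made-up illustrative values, not results from the cited works:

```python
# Sketch: z-score normalization of each sub-classifier's match scores followed
# by a weighted-sum fusion. Scores and weights are hypothetical placeholders.
import numpy as np

def z_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std()

def weighted_sum_fusion(score_lists, weights):
    """score_lists[k][c] = match score of sub-classifier k for enrolled class c."""
    normalized = np.vstack([z_normalize(s) for s in score_lists])
    fused = np.average(normalized, axis=0, weights=weights)
    return int(np.argmax(fused))           # index of the best-matching class

scores_clf1 = [0.9, 0.2, 0.4]              # hypothetical match scores, 3 classes
scores_clf2 = [0.5, 0.1, 0.8]
print(weighted_sum_fusion([scores_clf1, scores_clf2], weights=[0.7, 0.3]))  # -> 0
```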
Article
Full-text available
The human ear is an intriguing individual anatomical part used in passive, physiological biometric systems based on images acquired from digital cameras. The human ear has many singular features that permit the identification of particular individuals. It can be implemented in effective biometric systems, for example in crowd surveillance and identifying terrorists at public places such as airports, as well as in controlling access to government offices. Challenges and issues faced in the field of human identification using ear detection and recognition include the problem of occlusion, variations in illumination, and real-time implementation in accessing information from an integrated database system with high accuracy. Research on ear detection and recognition systems has been developing at a steady rate and is mostly constrained to controlled indoor environments. Other open research issues include ear print forensics, ear symmetry and ear uniqueness. This paper presents a review of existing biometric systems based on ear features and proposes a novel hybrid ear recognition framework for the advancement of passive human identification technology. The aim of this work is to build a passive identification system for hybrid ear biometrics from a digital image database collected from two types of identifiers (the right and left ears of the same person).
... EEG signals can be a great option for biometric identification because the brain's electrical activity is unique to each individual and is, moreover, closely related to visual, mood and auditory stimuli and, in general, to any stimulus experienced by the person (Table 1, monomodal and multimodal biometric studies: multimodal - EEG-EOG [1], EKG-EMG [8], EKG-PCG [17], EKG/face/fingerprint [20], PCG and EKG [24], EKG and physical activity [29], EEG and face [31], face and palmprint [34], face and iris [36][37][38][39], face and ear [43], face and lips [45], EKG and LDV [47], fingerprint and palmprint [49]; monomodal - EEG [2][3][4][5][6], EKG [9][10][11][12][13][14][15][16], FKP [18,19], iris [21][22][23], face [25][26][27][28], PCG [30], palmprint [32,33], periocular [35], fingerprint [40][41][42], tongue [44], gait [46], scleral [48], KnA [50], ear [51], eyes [52], LDV [53], finger vein [54], speech [55], PPG [56]). It is therefore necessary to consider that the cerebral response to any of these stimuli is different for each individual, which makes EEG signals difficult to supplant or acquire; these signals are therefore practical in biometrics. ...
... Although the present work shows that there is great potential in the use of physiological signals for biometric recognition, it is important that future analyses be performed on a larger grouped set of signals (table of signals and data fusion techniques: face and lips - feature-level fusion and score-level fusion based on the likelihood function [45]; LDV - data fusion modelling inter-session variability by the variance (as in the log-normal model) and information fusion training single-session models with separately extracted model-dependent informative components (feature fusion) [53]; face - information fusion by the sum of the scores and AND/OR of the decisions (feature fusion) [71]; tongue - sum, product and median rules (feature fusion) [44]; ear - score-level fusion with the weighted sum rule (classification fusion) [51]; finger-palmprint - feature-level fusion [49]; palmprint - weighted sum rule, Neyman-Pearson rule, heuristic rule and proposed heuristic rule (feature fusion) [33]; EEG-gyroscope - score-level fusion with mean, product, maximum and minimum rules and decision-level fusion with logical AND/OR [73]; scleral - score-level fusion with sum, min and max rules (feature fusion) [48]; FKP - sum and min rules (feature fusion) [19]; face-iris - decision-level fusion with weighted sum and sum rules [39]; face-iris - feature-level fusion and score-level fusion with the weighted sum rule [38]; eyes - weighted mean fusion (feature fusion) [52]). ...
Article
Full-text available
There is a growing interest in data fusion oriented to identification and authentication from biometric traits and physiological signals, because its capacity for combining multiple sources and multimodal analysis allows the performance of these systems to be improved. Thus, we considered it necessary to make an analytical review of this domain. This paper summarizes the state of the art of data fusion oriented to biometric authentication and identification, exploring its techniques, benefits, advantages, disadvantages, and challenges.
... The rank-1 recognition rate depends on the 28 sub-classes extracted from the 28 sub-windows of each image. A 78% rank-1 recognition rate is achieved for the 24th sub-class of the USTB database with 50% occlusion; with the same occlusion in the UND database, a 72% rate is achieved for the 19th sub-class [19]. ...
... Fig. 6: Ear image is divided into sub-windows [19]. ...
Article
Full-text available
Person identification has recently received significant attention, and there are several reasons for this trend toward ear recognition. Ear recognition is a biometric, non-contact method, like face recognition, that does not inconvenience people. Ears can also be used to identify people in surveillance images where the face may be completely occluded. Furthermore, ear appearance changes little with age. In reality, however, the ear image may be partially or completely occluded, and the occlusions that may occur during identification can ultimately lead to a decline in the efficiency of person identification. We provide a literature review of recent research in ear recognition and detection. Besides being a review of the literature, this study can be useful for researchers who might adopt a new approach. The problems of identification, such as occlusion of the ear image, and the countermeasures to solve them are studied in this paper.
... The human ear has several advantages over other modalities: it has a rich structure, it is a smaller object (low resolution), it is stable over time, it is a modality that people accept, it is unaffected by changes in age, facial expressions, position, and rotation, and ear images can be acquired from a distance without the subject's participation [3]. Like the face, the ear has regular traits and does not have a completely random structure. ...
... Yuan and Mu [81] (2012) developed an ear identification method focused on local information fusion to overcome ear identification with incomplete occlusion. They firstly divided the 2D ear picture into several blocks. ...
Article
Automatic identity recognition from ear images is an active research topic in the biometric community. The ability to secretly acquire images of the ear remotely and the stability of the ear shape over time make this technology a promising alternative for surveillance, authentication, and forensic applications. In recent years, significant research has been conducted in this area. Nevertheless, challenges remain that limit the commercial use of this technology. Several phases of the ear recognition system have been studied in the literature, from ear detection, normalization, and feature extraction to classification. This paper reviews the most recent methods used to describe and classify biometric features of the ear. We propose a first taxonomy to group existing approaches to ear recognition, including 2D, 3D, and combined 2D and 3D methods, as well as an overview of historical advances in this field. It is well known that data and algorithms are the essential components in biometrics, particularly in ear recognition. However, early ear recognition datasets were very limited and collected in laboratories under controlled environments. With the wider use of deep neural networks, a considerable amount of training data has become necessary if acceptable ear recognition performance is to be achieved. As a consequence, current ear recognition datasets have increased significantly in size. This paper gives an overview of the chronological evolution of ear recognition datasets and compares the performance of conventional vs. deep learning methods on several datasets. We propose a second taxonomy to classify the existing databases, including 2D, 3D, and video ear datasets. Finally, some open challenges and trends are debated for future research.
... A 2D ear recognition method based on local information fusion is proposed in [13] to investigate the issues related to recognition under partial occlusion. The image is split into sub-windows; neighborhood preserving embedding is then used to extract the features of each sub-window, and the most discriminative sub-windows are selected according to the recognition rate. ...
... Studies in biometrics explain that no matter how accurate a biometric recognition system is, presentation attack detection (PAD) is necessary for the recognition system [1][2][3]. Moreover, it is still necessary to improve ear recognition systems with different ideas, such as using multimodal biometrics, which is proposed in this paper. ...
Preprint
Ear recognition systems have been widely studied, whereas there are only a few ear presentation attack detection methods for ear recognition systems; consequently, there is no publicly available ear presentation attack detection (PAD) database. In this paper, we propose a PAD method using a pre-trained deep neural network and release a new dataset called the Warsaw University of Technology Ear Dataset for Presentation Attack Detection (WUT-Ear V1.0). There is no ear database captured using mobile devices. Hence, we have captured more than 8500 genuine ear images from 134 subjects and more than 8500 fake ear images. We made replay attacks and printed-photo attacks with 3 different mobile devices. Our approach achieves 99.83% and 0.08% for the half total error rate (HTER) and attack presentation classification error rate (APCER), respectively, on the replay-attack database. The captured data are analyzed and visualized statistically to establish their importance and make them a benchmark for further research. The experiments deliver a secure PAD method for ear recognition systems, a publicly available ear image dataset, and an ear PAD dataset. The codes and evaluation results are publicly available at https://github.com/Jalilnkh/KartalOl-EAR-PAD.
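For context, the error rates quoted in this abstract are standard presentation-attack-detection metrics; the sketch below shows how APCER, BPCER and HTER are commonly computed (a generic illustration of the standard definitions, not the authors' evaluation code):

```python
# Common PAD metrics: APCER (attack presentations accepted as bona fide),
# BPCER (bona fide presentations rejected), and HTER (their average).
import numpy as np

def pad_metrics(y_true, y_pred):
    """y_true/y_pred: 1 = bona fide (genuine), 0 = attack (fake)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    attacks, bona_fide = (y_true == 0), (y_true == 1)
    apcer = np.mean(y_pred[attacks] == 1)    # attacks wrongly accepted
    bpcer = np.mean(y_pred[bona_fide] == 0)  # genuine wrongly rejected
    return apcer, bpcer, (apcer + bpcer) / 2.0

print(pad_metrics([1, 1, 0, 0, 0], [1, 0, 0, 0, 1]))  # (0.333..., 0.5, 0.4166...)
```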
... Recently, with the appearance of more and more available ear image databases, human ear recognition techniques have attracted increasing attention. Traditionally, human ear recognition methods are based on geometrical features [4,5] and [6], (global and local) appearance features [7][8][9] and [10], and fusion features [11,12] for extracting ear image features. More recently, deep convolutional neural network (CNN) features have been used for many applications [13][14][15][16] and [17], and have started to be applied to the ear recognition problem as more ear image databases become available. ...
Article
Full-text available
Recently, deep convolutional neural networks (CNNs) have been used for ear recognition with the increasing and available ear image databases. However, most known ear recognition methods may be affected by selecting and weighting features; this is always a challenging issue in ear recognition and other pattern recognition applications. Metric learning can address this issue by using an accurate and efficient metric distance called Mahalanobis distance. Therefore, this paper presents a novel approach for ear recognition problems based on a learning Mahalanobis distance metric on deep CNN features. In detail, firstly, various deep features are extracted by adopting VGG and ResNet pre-trained models. Secondly, the discriminant correlation analysis is exploited to eliminate the dimensionality problem. Thirdly, the Mahalanobis distance is learned based on LogDet divergence metric learning. Finally, K-nearest neighbor is used for ear recognition. The experiments are performed on four public ear databases: AWE, USTB II, AMI, and WPUT, and experimental results prove that the proposed approach outperforms the existing state-of-the-art ear recognition methods.
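A minimal sketch of the final matching stage described above (Mahalanobis distance followed by k-nearest-neighbour classification); here the metric is estimated from a regularized inverse covariance as a simple stand-in for the LogDet-divergence metric learning used in the paper, and the feature vectors are random placeholders:

```python
# Sketch: Mahalanobis distance over feature vectors plus k-NN classification.
# M is estimated from the inverse covariance of the gallery features, a simple
# stand-in for a learned LogDet-divergence metric.
import numpy as np

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

def knn_predict(gallery, labels, probe, M, k=1):
    dists = [mahalanobis(probe, g, M) for g in gallery]
    nearest = np.argsort(dists)[:k]
    vals, counts = np.unique(np.asarray(labels)[nearest], return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 16))             # 10 gallery feature vectors
labels = [i % 5 for i in range(10)]
M = np.linalg.inv(np.cov(gallery, rowvar=False) + 1e-3 * np.eye(16))
print(knn_predict(gallery, labels, gallery[3] + 0.01, M, k=1))  # expect label 3
```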
... In [63], a 2D ear recognition approach is proposed based on local information fusion to solve the problem of ear recognition under partial occlusion. The whole image is divided into sub-windows; then, neighborhood preserving embedding is utilized to extract each sub-window's features, and the most discriminative sub-windows are selected based on the recognition rate. ...
Article
Full-text available
Extraction and description of image features is an active research topic and important for several applications in the computer vision field. This paper presents a new noise-tolerant and rotation-invariant local feature descriptor called robust local oriented patterns (RLOP). The proposed descriptor extracts local image structures utilizing edge directional information to provide rotation-invariant patterns and to be effective under noise and changing illumination. This is achieved by a non-linear amalgamation of two specific strategies: binarizing the neighborhood pixels of a patch and assigning binomial weights in the same formula. In the encoding methodology, the neighboring pixels are binarized with respect to the mean value of the pixels in an image patch of size 3 × 3 instead of the center pixel. Thus, the obtained codes are rotation-invariant and more robust against noise and other non-monotonic grayscale variations. Ear recognition is considered as the representative problem, where the ear involves localized patterns and textures. The proposed descriptor encodes all of an image's pixels, and the resulting RLOP-encoded image is divided into several regions. Histograms of the regions are constructed to estimate the distribution of features. Then, all histograms are concatenated together to form the final descriptor. The robustness and effectiveness of the proposed descriptor are evaluated through several identification and verification experiments on four different ear databases: IIT Delhi-I, IIT Delhi-II, AMI, and AWE. It is observed that the proposed descriptor outperforms the state-of-the-art texture-based approaches, achieving a recognition rate of 98% on average and providing the best performance among the tested descriptors.
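A simplified sketch in the spirit of the descriptor described above: 3x3 patches binarized against the patch mean (rather than the centre pixel, as in classic LBP), encoded with binomial weights, and summarized by concatenated regional histograms. This illustrates the general idea only and is not the exact RLOP formulation:

```python
# Mean-thresholded local pattern codes plus regional histograms (illustrative
# simplification of an RLOP-style descriptor, not the published formulation).
import numpy as np

def mean_thresholded_codes(img):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    weights = 2 ** np.arange(8)
    # offsets of the 8 neighbours around the centre of a 3x3 patch
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            m = img[i - 1:i + 2, j - 1:j + 2].mean()
            bits = [1 if img[i + di, j + dj] >= m else 0 for di, dj in offs]
            codes[i - 1, j - 1] = int(np.dot(bits, weights))
    return codes

def regional_histograms(codes, grid=(4, 4)):
    """Concatenate 256-bin histograms of the code image over a grid of regions."""
    rows, cols = grid
    h, w = codes.shape
    hists = [np.bincount(codes[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols].ravel(),
                         minlength=256)
             for r in range(rows) for c in range(cols)]
    return np.concatenate(hists)
```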
... Marsico et al. (2016) proposed the HERO (human ear recognition against occlusions) technique and achieved recognition rates ranging from 90% to 98% for unoccluded ear images; after modifying the test images by superimposing synthetic square occlusions, the recognition rate decreased to a range of 84% to 97%. Yuan and Mu (2012) achieved 94% accuracy for unoccluded images and 80% accuracy for 35% occlusion due to hair by designing a 2D ear recognition system based on local information. To deal with partial occlusion, Zhang et al. (2013) proposed a feature extraction method using the scale information of Gabor wavelets and achieved success rates of 96%, 91% and 86% for occlusions of 15%, 25% and 35%, respectively. ...
... Among different kinds of biometric identifiers, hand-based biometrics has attracted attention in recent years. Modalities such as fingerprint [2][3][4][5][6], palm print [7][8][9], hand geometry, hand vein [10] and inner knuckle print [9,11] have been investigated in this area. In the last few years, human finger knuckle print (FKP) and ear recognition have become an interesting scenario in biometric authentication. ...
Article
Full-text available
Biometrics is the branch of science that helps to measure an individual's features by utilising their behavioural and physiological characteristics. For years, biometric technologies have been contemplated as unique tools for security purposes. Among biometrics, ear recognition and finger knuckle print have emerged as a booming area of analysis, attracting interest among several researchers in recent years. It has an ample range of private and law-enforcement applications. A combination of multiple human attributes is authenticated and considered to be a competent strategy in the case of a multimodal personal authentication system. In this paper, extracting the local and global features via the structure of the time-frequency domain has been studied. The proposed scheme exploits the analysis of dual biometric modalities, i.e. ear and finger knuckle print, carried out at the feature-level fusion stage. The features of the two biometric patterns are obtained by generating local and global feature information that helps in refining the alignment of the dual biometric images in matching, i.e. the Discrete Orthonormal Stockwell Transform for ear recognition and Band-Limited Phase-Only Correlation for finger knuckle print. Experimental results conducted with FKP and ear demonstrate improved recognition accuracy.
... Choras [11] detected the contour of the ear region by finding the difference between the maximum and minimum pixel intensity values and comparing it against a threshold, followed by geometric feature extraction that considers only a few geometric features. In [12], an improved Adaboost algorithm was employed for ear detection, followed by feature extraction from the ear image using the Neighborhood Preserving Embedding (NPE) algorithm. It is observed that some parts of the ear region, especially the inner ear region, carry a higher amount of information. ...
... They are considered more accurate than fingerprint systems, although a fake iris can also be manufactured from a person's photograph by using a high-resolution printer and a normal contact lens [3], [4]. To address these challenges, researchers have been seeking new alternatives, including palm print [5], vein patterns [6], [7], ear contour [8], finger-knuckle-print [9], nose pore [10], and combined approaches of multimodal biometrics [11]. Nevertheless, these technologies still depend on the structural characteristics of the acquired image and are therefore susceptible to forgery using forged biometric traits. ...
Article
Full-text available
Current biometrics rely on images obtained from the structural information of physiological characteristics, which is inherently a fatal problem of being vulnerable to spoofing. Here, we studied personal identification using the frequency-domain information based on human body vibration. We developed a bioacoustic frequency spectroscopy system and applied it to the fingers to obtain information on the anatomy, biomechanics, and biomaterial properties of the tissues. As a result, modulated microvibrations propagated through our body could capture a unique spectral trait of a person and the biomechanical transfer characteristics persisted for two months and resulted in 97.16% accuracy of identity authentication in 41 subjects. Ultimately, our method not only eliminates the practical means of creating fake copies of the relevant characteristics but also provides reliable features.
... Under the challenges mentioned above, researchers have been seeking new alternatives to existing methods. Many novel and unconventional features, such as ear contour 9, palm print 10,11, nose pore 12,13, vein patterns 14,15, finger-knuckle-print 16,17, and multimodal approaches have been adopted for development as new biometric methods 18,19. In recent years, new approaches such as biomedical engineering technologies have been proposed to provide non-image-based frequency- or time-domain information. ...
Article
Full-text available
We present a novel biometric authentication system enabled by ratiometric analysis of impedance of fingers. In comparison to the traditional biometrics that relies on acquired images of structural information of physiological characteristics, our biological impedance approach not only eliminates any practical means of making fake copies of the relevant physiological traits but also provides reliable features of biometrics using the ratiometric impedance of fingers. This study shows that the ratiometric features of the impedance of fingers in 10 different pairs using 5 electrodes at the fingertips can reduce the variation due to undesirable factors such as temperature and day-to-day physiological variations. By calculating the ratio of impedances, the difference between individual subjects was amplified and the spectral patterns were diversified. Overall, our ratiometric analysis of impedance improved the classification accuracy of 41 subjects and reduced the error rate of classification from 29.32% to 5.86% (by a factor of 5).
... That has been able to achieve stable walking. Li Yuan and Zhichun Mu et al. [29] have presented a new chaff-point method based on fuzzy vault. Their method was used for both the face and the ear. ...
Article
Full-text available
Multimodal biometrics is mainly used for the purpose of person certification and proof. Many biometrics are used for human authentication, among which ear and fingerprint are efficient ones. There are three vital phases involved in biometric detection: preprocessing, feature extraction and classification. Initially, preprocessing is done with the help of a median filter, which aids the task of cropping the image to choose the position. Then, texture and shape features are extracted from the preprocessed fingerprint and ear images. Finally, the extracted features are integrated. The integrated features, in turn, are proficiently classified by means of an optimal neural network (ONN). Here, the NN weights are optimally selected with the help of the firefly algorithm (FF). The biometric images are classified into fingerprint and ear; images of the same person are amassed in one group, and non-matching images are stored in a different group. The performance of the proposed approach is analyzed in terms of evaluation metrics.
... A five-point touch pattern based biometric authentication approach has also been proposed by a recent study (Blaica et al. 2013), which utilizes touch patterns for authentication. The facial region covers ear shape, teeth and tongue print characteristics (Yuan and Mu 2012; Zhang et al. 2010). The retina, iris and sclera vasculature are covered under the ocular region (Bowyer et al. 2008). ...
Article
Full-text available
Authorized access to resources by legitimate users plays a crucial role in providing a secure and hassle-free user experience in digital environments. The password remains the major authentication mechanism, though it has various drawbacks such as leakage due to phishing, shoulder surfing, etc. This paper proposes two stronger transformations of the password, termed "PassContext" and "PassActions", which attempt to overcome the vulnerabilities of the plain-text password by harnessing the intricacies of human-computer interaction. PassContext incorporates hardware- and software-oriented context information along with the keyed-in password text during the verification process to provide improved authentication. PassActions transforms the password from a text-only representation into a dynamic user-interaction sequence, which improves the strength of the password significantly. The proposed model incorporates methodologies to represent PassContext and PassActions for both validation and persistence purposes. The prototype implementations of PassContext and PassActions are evaluated with a suite of thirteen proposed measures, a system usability survey (SUS) for usability analysis, and a well-established comparative framework.
... However, these features are not sufficient to establish personal identity; this kind of evidence may always be corroborated with some other indications present at the scene of the crime. Using computer forensics, different methods of ear identification have been developed which may be helpful in extracting and identifying ear images from CCTV cameras and other surveillance systems (Emersic et al. 2017; Yuan and Chun Mu 2012; Kumar and Wu 2012; Kumar and Chan 2013). However, the modern system of identification using new computerized techniques, such as automatic identity recognition and local information fusion from ear images, is based upon computerized algorithms; these must be complemented with anthropological knowledge of morphological variations. ...
Article
Full-text available
Background The external human ear is considered to be a highly variable structure showing different morphological and individualistic features in different individuals and population groups. The uniqueness of the ear may be useful in establishing the identity of individuals by direct examination, during the examination of CCTV footage or analysis of ear prints. Considering the forensic significance of the human ear and ear prints encountered at the scene of the crime, the present study is an attempt to evaluate various morphological characteristics of the ear in a north Indian population. Methodology The sample for the present study comprises 90 males and 87 females aged between 18 and 30 years. All the study participants were from the upper reaches of Himachal Pradesh in North India. The morphological characteristics, such as the overall shape of the ear, the size and shape of the tragus, the earlobe, the shape of the helix, and the forms of Darwin's tubercle, were studied in the participants. Results The oval-shaped ear was present among 40% of the males and 44.8% of the females in the study sample. The other types of ear, such as oblique, rectangular, round, and triangular, were also found in both sexes. Bilateral asymmetry was observed in the shape of the ear. The shape of the tragus also varied with respect to the left and right sides as well as sexes. The earlobe showed different characteristics in different individuals. In nearly half of the cases in both males and females, the earlobe was found to be attached to the face; in many cases it was free, and in some partially attached. The size and shape of the earlobe also showed variations with respect to sides as well as sexes. Darwin's tubercle showed a variety of structures on both the left and right sides in both sexes. Conclusion The present study shows that the individualistic characteristics of the ear can provide very useful information for personal identification in forensic examinations. The shape of the ear and important structures such as the tragus, helix, earlobe, and Darwin's tubercle show a variety of forms and individuality. The importance and variability of the human ear may encourage researchers to conduct further studies and solve forensic cases pertaining to the investigation of CCTV footage and the examination of the dead in airplane crashes, intentional mutilation and dismemberment, explosions, or other mass disasters.
... Ear biometrics is a new field of research. Different features of the ear can be used for biometric identification [9]. The terminology of the human ear is presented in Figure 1. ...
Article
Full-text available
Biometrics are automated methods of recognising a person based on physiological or behavioural characteristics. To discriminate individuals, multimodal biometrics has already proven to be an effective strategy. Biometric features can be broadly classified as physiological features and behavioural features. Ear, face, and palm come under physiological features; gait and signature verification come under behavioural features. Combining multiple human trait features for biometric identification is multimodal biometric identification. Here, ear and palm print are the two biometric modalities used for person identification, fused at feature level. To extract the features for person identification, Multiblock Local Binary Pattern and Binarised Statistical Image Features are used. The intrusive means required for acquiring the information can be a common drawback when using biometric features such as iris patterns, facial traits, etc. To overcome these drawbacks, the ear can be used as a biometric feature; it also has the advantages of not changing over time and not being influenced by facial expressions.
... This achieved an 80% recognition rate. Yuan and Mu [8] proposed an ear recognition system based on 2-dimensional images and achieved 94% accuracy for 0% occlusion and 85% accuracy for 35% occlusion. A Gabor-wavelet-feature-based recognition system is presented by Zhang et al. [9] to deal with partial occlusion, achieving success rates of 96, 91 and 86% for occlusions of 15, 25 and 35%, respectively. ...
Chapter
A preliminary study of an ear-based recognition system is reported here. The texture of the external ear does not vary with age, obesity, disease, expression, etc., unlike other common biometric parameters, so the ear is expected to have good potential as a biometric parameter. In this study, geometrical features are extracted from ear images and compared, considering an arbitrary image as a new entry. The problem of occlusion due to jewelry is counteracted by a feature-based empirical formulation explained in the method section. A Root Mean Square Error (RMSE) based comparison method is used for classification, and 100% accuracy is achieved for a group of 10 subjects.
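A minimal sketch of an RMSE-based comparison of geometric feature vectors, as described in this chapter summary; the feature values and subject identifiers are made-up placeholders:

```python
# Sketch: match a probe's geometric feature vector to the enrolled template with
# the lowest root-mean-square error. Feature values are hypothetical.
import numpy as np

def rmse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def identify(probe, gallery):
    """gallery: dict of subject_id -> geometric feature vector."""
    return min(gallery, key=lambda sid: rmse(probe, gallery[sid]))

gallery = {"subject_01": [62.0, 35.5, 1.74], "subject_02": [58.2, 33.0, 1.69]}
print(identify([61.5, 35.2, 1.73], gallery))   # -> subject_01
```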
... As previously mentioned, inter-level fusion techniques are used in face, fingerprint, iris and signature recognition and can be applied in multibiometric ear recognition [14,15]. Intra-level fusion is primarily based on multi-classifier modalities and classical score-level fusion techniques [24]. The majority of inter- and intra-level fusion studies have focussed on dual and triple weighted fusion. ...
Article
Full-text available
Although biometric ear recognition has recently gained a considerable degree of attention, it remains difficult to use currently available ear databases because most of them are constrained. Here, we introduce a novel architecture called ScoreNet for unconstrained ear recognition. The ScoreNet architecture combines a modality pool with a fusion learning approach based on deep cascade score-level fusion (DCSLF). Hand-crafted and deep learning methods can be used together under the ScoreNet architecture. The proposed method represents the first automated fusion learning (AutoFL) approach and is also compatible with parallel processing. We evaluated ScoreNet using the Unconstrained Ear Recognition Challenge Database (UERC), which is widely considered to be the most difficult database for evaluating ear recognition developed to date, and found that ScoreNet outperformed all other previously reported methods and achieved state-of-the-art accuracy.
... The performance of the fused SPIRAL and BSIF features applied to different databases is presented in Tables 1-3. It shows that, compared to other feature extraction methods [7,8,9,14,15,16,17,23,25], the experimental method provides superior performance, with an EER from 0 to 2.06. The performance of the method applied on the PolyU PPDB is presented in Table 1. ...
Chapter
A new palmprint recognition system is presented here. The method of extracting and evaluating textural feature vectors from palmprint images is tested on the PolyU database. Furthermore, this method is compared against other approaches described in the literature that are founded on binary pattern descriptors combined with spiral-feature extraction. This novel system of palmprint recognition was evaluated for its collision test performance, precision, recall, F-score and accuracy. The results indicate the method is sound and comparable to others already in use.
... Recently, Nanni and Lumini [22] adopted the sequential forward floating selection (SFFS) to select the best features from sub-windows in an ear image. Yuan and Mu [23] presented a brief review of ear recognition and proposed a fusion method for ear recognition based on local information. Most of the known ear recognition methods adopt the approximate nearest neighbor (ANN) [19] or support vector machine (SVM) [24,25] as the classifier. ...
Article
Full-text available
The ear recognition task is known as predicting whether two ear images belong to the same person or not. In this paper, we present a novel metric learning method for ear recognition. This method is formulated as a pairwise constrained optimization problem. In each training cycle, the method selects the nearest similar and dissimilar neighbors of each sample to construct the pairwise constraints, and then solves the optimization problem by iterated Bregman projections. Experiments are conducted on the AMI, USTB II and WPUT databases. The results show that the proposed approach can achieve promising recognition rates in ear recognition, and its training process is much more efficient than that of the other competing metric learning methods.
... To capture the local patterns of optical ear images, the Scale-Invariant Feature Transform (SIFT), which has seen remarkable success in object recognition and image retrieval, was also used in [5]. Local feature extraction was also used in [6]. Although color information might not be salient for characterizing ear images, the work presented in [7] is an algorithm that explores this cue for ear recognition. ...
Article
Full-text available
Ear print is an emerging biometric modality that has been attracting increasing attention in the biometric community. However, compared to well-established modalities, such as face and fingerprints, a limited number of contributions have been offered on ear imaging. Moreover, only a few studies address the aspect of ear characterization (i.e., feature design). In this respect, in this paper, we propose a novel descriptor for ear recognition. The proposed descriptor, namely DLPQ, i.e., Dense Local Phase Quantization, is based on the phase responses generated using the well-known Local Phase Quantization (LPQ) descriptor. Further, local dense histograms are extracted from the horizontal stripes of the phase maps, followed by a pooling operation to address viewpoint changes, and are finally concatenated into an ear descriptor. Although the proposed DLPQ descriptor is built on the traditional LPQ, we particularly show that drastic improvements (of over 20%) are attained with respect to this latter descriptor on two benchmark datasets. Furthermore, the proposed descriptor stands out among recent ear descriptors from the literature.
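A minimal sketch of the stripe-wise dense-histogram step described above, assuming the LPQ code image has already been computed; the stripe count and max-pooling choice are illustrative assumptions rather than the authors' exact design:

```python
# Sketch: dense cell histograms over horizontal stripes of a (precomputed) LPQ
# code image, pooled per stripe and concatenated into one descriptor.
import numpy as np

def stripe_descriptor(code_img, n_stripes=8, cells_per_stripe=4):
    h, w = code_img.shape
    stripes = []
    for s in range(n_stripes):
        stripe = code_img[s * h // n_stripes:(s + 1) * h // n_stripes, :]
        # dense cell histograms within the stripe, then max-pooled across cells
        cells = [stripe[:, c * w // cells_per_stripe:(c + 1) * w // cells_per_stripe]
                 for c in range(cells_per_stripe)]
        hists = np.stack([np.bincount(c.ravel(), minlength=256) for c in cells])
        stripes.append(hists.max(axis=0))
    return np.concatenate(stripes)

codes = np.random.default_rng(0).integers(0, 256, size=(96, 64))  # fake LPQ codes
print(stripe_descriptor(codes).shape)   # (8 * 256,) = (2048,)
```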
... Researchers developed several approaches for ear recognition based on 2D ear images in the early years [2][3][4]. From those works, researchers found that the performance of 2D ear recognition methods was greatly affected by pose variation and imaging conditions. Compared with 2D ear images, 3D ear data are relatively insensitive to illumination and posture variations. ...
Article
Full-text available
Most existing ICP (Iterative Closest Point)-based 3D ear recognition approaches resort to coarse-to-fine ICP algorithms to match 3D ear models. With such an approach, the gallery-probe pairs are coarsely aligned based on a few local feature points and then finely matched using the original ear point cloud. However, such an approach ignores the fact that not all the points in the coarsely segmented ear data make positive contributions to recognition. As such, the coarsely segmented ear data, which contains a lot of redundant and noisy data, could lead to a mismatch in the recognition scenario. Additionally, the fine ICP matching can easily get trapped in local minima without the constraint of local features. In this paper, an efficient and fully automatic 3D ear recognition system is proposed to address these issues. The system describes the 3D ear surface with a local feature, the Local Surface Variation (LSV), which is responsive to the concave and convex areas of the surface. Instead of being used to extract discrete key points, the LSV descriptor is utilized to eliminate redundant flat non-ear data and obtain normalized and refined ear data. At the recognition stage, only a one-step modified iterative closest point using local surface variation (ICP-LSV) algorithm is proposed, which provides additional local feature information to the ear recognition procedure to enhance both matching accuracy and computational efficiency. On an Intel® Xeon® W3550, 3.07 GHz workstation (DELL T3500, Beijing, China), the authors were able to extract features from a probe ear in 2.32 s and match the ear with a gallery ear in 0.10 s using the method outlined in this paper. The proposed algorithm achieves a rank-one recognition rate of 100% on the Chinese Academy of Sciences' Institute of Automation 3D Face database (CASIA-3D FaceV1, CASIA, Beijing, China, 2004) and 98.55% with a 2.3% equal error rate (EER) on Collection J2 of the University of Notre Dame Biometrics Database (UND-J2, University of Notre Dame, South Bend, IN, USA, between 2003 and 2005).
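A minimal sketch of a local-surface-variation measure computed from the eigenvalues of each point's neighbourhood covariance, which is the general idea behind the LSV feature described above (low on flat regions, higher on concave or convex ones); the neighbourhood size and toy point cloud are assumptions, and this is not the authors' ICP-LSV implementation:

```python
# Local surface variation per 3D point: lambda_min / (lambda_1+lambda_2+lambda_3)
# of the covariance of the point's k nearest neighbours.
import numpy as np

def local_surface_variation(points, k=10):
    points = np.asarray(points, float)
    lsv = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]        # k nearest neighbours (incl. p)
        eig = np.sort(np.linalg.eigvalsh(np.cov(nbrs, rowvar=False)))
        lsv[i] = eig[0] / max(eig.sum(), 1e-12)
    return lsv

# toy cloud: a flat plane with one raised point; that point gets the highest LSV
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 1, 200), np.zeros(200)]
plane[0, 2] = 0.2                               # lift one point off the plane
print(np.argmax(local_surface_variation(plane)))  # very likely index 0
```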
... The human ear is a new field of biometric research. Researchers have recently investigated the use of 2D [12,13] and 3D ear shape data [14][15][16]. For biometric recognition, diverse features can be presented in a 2D ear image. ...
Article
Combining multiple human trait features is a proven and effective strategy for biometric-based personal identification. In this paper, we investigate the fusion of two biometric modalities, i.e. ear and palmprint, at feature-level. Ear and palmprint patterns are characterized by a rich and stable structure, which provides a large amount of information to discriminate individuals. Local texture descriptors, namely Local Binary Patterns (LBP), Weber Local Descriptor (WLD), and Binarized Statistical Image Features (BSIF), were used to extract the discriminant features for robust human identification. Our extensive experimental analysis based on the benchmark IIT Delhi-2 ear and IIT Delhi palmprint databases confirmed that the proposed multimodal biometric system is able to increase recognition rates compared to that produced by single-modal biometrics, attaining a recognition rate of 100%.
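A minimal sketch of feature-level fusion by histogram concatenation, as described above, using only LBP (via scikit-image) as a stand-in for the LBP/WLD/BSIF trio; the LBP parameters and random images are placeholders:

```python
# Feature-level fusion for ear + palmprint: per-modality LBP histograms are
# concatenated into one vector before matching. Requires scikit-image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fused_feature(ear_img, palm_img):
    return np.concatenate([lbp_histogram(ear_img), lbp_histogram(palm_img)])

rng = np.random.default_rng(0)
ear = (rng.random((64, 48)) * 255).astype(np.uint8)
palm = (rng.random((64, 64)) * 255).astype(np.uint8)
print(fused_feature(ear, palm).shape)   # (20,) = two 10-bin uniform-LBP histograms
```

The same concatenation pattern would apply to WLD and BSIF histograms; only the per-modality encoder changes.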
Article
Biometric applications have made biometric authentication a replacement for the traditional password in many cases. Biometric recognition technology has the advantages of convenience and high stability, facilitating identity recognition. However, the shortcoming of biometric authentication is that biometric data are easy to steal and leak, which raises security concerns. In this paper, we design a lightweight joint biometric authentication scheme (SELBA) based on face and fingerprint. We improve searchable encryption (SE) to protect the privacy and security of the extracted biometric features in the storage and authentication stages. Because, in existing schemes, biometric features cannot be changed or retrieved once leaked, we propose a cancelable mechanism to reconstruct stolen or damaged biometric templates. Moreover, we provide a complete security analysis of SELBA to meet the confidentiality, renewability, revocability, irreversibility and unlinkability requirements of templates in biometric recognition. Meanwhile, we conduct experiments on real datasets to show that SELBA is secure, efficient and easy to use in practical application scenarios.
Article
Ear recognition systems are among the popular person identification systems, and these biometric systems need to be protected against attackers. In this paper, a novel method is proposed to detect spoof attacks on ear recognition systems. The proposed method employs a deep-learning-based Convolutional Neural Network (CNN) and Image Quality Measure (IQM) techniques to detect printed-photo attacks against ear recognition systems. Full-reference and no-reference image quality measures are used to extract ear image features. Score-level fusion is used to combine the scores obtained from the image quality measures. Finally, decision-level fusion is employed to fuse the decisions obtained from the CNN and IQM systems. The final decision, real or fake image, is the output of the whole system. The experiments are conducted on publicly available ear datasets, namely AMI, UBEAR, IITD, USTB set 1 and USTB set 2, and the obtained results are compared with state-of-the-art methods that also focus on printed-photo attacks.
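A minimal sketch of the two fusion stages described above: score-level fusion of several image-quality measures, then decision-level fusion with a CNN verdict. The weighted-sum rule, threshold and AND rule are illustrative assumptions, not the paper's exact scheme:

```python
# Score-level fusion of image-quality measures followed by decision-level
# fusion with a CNN verdict (illustrative thresholds and AND rule).
import numpy as np

def iqm_decision(quality_scores, weights, threshold=0.5):
    """Weighted-sum fusion of image-quality scores -> True (real) / False (fake)."""
    fused = float(np.dot(quality_scores, weights) / np.sum(weights))
    return fused >= threshold

def final_decision(cnn_says_real, quality_scores, weights):
    """Decision-level fusion: accept only if both subsystems say 'real'."""
    return cnn_says_real and iqm_decision(quality_scores, weights)

print(final_decision(True, [0.8, 0.6, 0.7], [1.0, 0.5, 0.5]))  # True  (real)
print(final_decision(True, [0.2, 0.3, 0.1], [1.0, 0.5, 0.5]))  # False (fake)
```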
Chapter
This chapter introduces a multimodal technique that uses 2D and 3D ear images for secure access. The technique uses 2D-modality to identify keypoints and 3D-modality to describe keypoints. Upon detection and mapping of keypoints into 3D, a feature descriptor vector is computed around each mapped keypoint in 3D. We perform a two-stage, coarse and fine alignment to fit the 3D ear image of the probe to the 3D ear image of the gallery. The probe and gallery image keypoints are compared using feature vectors, where very similar keypoints are used as coarse alignment correspondence points. Once the ear pairs are matched fairly closely, the entire data is finely aligned to compute the matching score. A detailed systematic analysis using a large ear database has been carried out to show the efficiency of the technique proposed.
Article
Identification systems based on biometric features are becoming increasingly important, and one of the most common biometric features is the ear. The accuracy of these systems depends heavily on the characteristics extracted from it. In this paper, an appropriate combination of local and global features in the frequency domain is extracted as unique features of the ear region. In the proposed approach, the image quality is first improved by Contrast-Limited Adaptive Histogram Equalization. Then, the global features of the ear region are extracted by applying the Gabor-Zernike operator to the whole image and its non-overlapping blocks. In addition, to extract local features, the local phase quantization operator is applied to the original image of the ear region. Then, the optimum combination of global and local features is selected using a Genetic Algorithm. Finally, the nearest-neighbor classifier with the Canberra distance is used to identify users. The proposed approach is evaluated on three databases, i.e. USTB-1, IIT125 and IIT221. Recognition rates of 100%, 99.2% and 97.13% are reported on these databases, respectively. The obtained results show that the proposed approach performs better than existing ear recognition methods.
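A minimal sketch of the final matching step described above, i.e. nearest-neighbour identification with the Canberra distance (requires SciPy); the feature vectors below are random placeholders:

```python
# Nearest-neighbour identification with the Canberra distance,
# d(x, y) = sum_i |x_i - y_i| / (|x_i| + |y_i|).
import numpy as np
from scipy.spatial.distance import canberra

def nearest_neighbor_canberra(probe, gallery_features, gallery_labels):
    dists = [canberra(probe, g) for g in gallery_features]
    return gallery_labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
gallery = rng.random((5, 32))                      # 5 enrolled feature vectors
labels = ["s1", "s2", "s3", "s4", "s5"]
print(nearest_neighbor_canberra(gallery[2] + 0.01, gallery, labels))  # -> "s3"
```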
Article
Mobile devices have brought great convenience to us in recent years, allowing users to enjoy various applications anytime and anywhere, such as online shopping, Internet banking, navigation and mobile media. While users enjoy the convenience and flexibility of the "Go Mobile" trend, their sensitive private information (e.g., name and credit card number) on the mobile devices could be disclosed. An adversary could access the sensitive private information stored on the mobile device by unlocking it. Moreover, the user's mobile services and applications are all exposed to security threats. For example, the adversary could utilize the user's mobile device to conduct non-permitted actions (e.g., making online transactions and installing malware). Authentication on mobile devices plays a significant role in protecting the user's sensitive information on mobile devices and preventing any non-permitted access to them. This paper surveys the existing authentication methods on mobile devices. In particular, based on the basic authentication metrics (i.e., knowledge, ownership and biometrics) used in existing mobile authentication methods, we categorize them into four categories: knowledge-based authentication (e.g., passwords and lock patterns), physiological biometric-based authentication (e.g., fingerprint and iris), behavioral biometric-based authentication (e.g., gait and hand gesture), and two/multi-factor authentication. We compare the usability and security level of the existing authentication approaches among these categories. Moreover, we review the existing attacks on these authentication approaches to reveal their vulnerabilities. The paper points out that the trend of authentication on mobile devices will be multi-factor authentication, which determines the user's identity using the integration (not the simple combination) of more than one authentication metric. For example, the user's behavioral biometrics (e.g., keystroke dynamics) could be extracted simultaneously while he/she inputs the knowledge-based secrets (e.g., PIN), which can provide enhanced authentication while sparing the user the trouble of conducting multiple inputs for different authentication metrics.
Chapter
Cardiovascular disease screening is an effective means to control the incidence of cardiovascular disease. The earlobe crease is an important marker for identifying cardiovascular diseases and can be used as an important sign for cardiovascular disease screening. Through one-click uploading of photos of the human ear, an image-recognition-based analysis of the earlobe crease is carried out, and the medical staff's initial screening assessment of cardiovascular disease for part of the population is transformed into intelligent primary screening for cardiovascular disease in the city.
Article
As one of the most important biometrics, ear biometrics is getting more and more attention. Ear recognition has unique advantages and can make identification more secure and reliable together with other biometrics (e.g. face and fingerprint). Therefore, we investigate related information about ear recognition and classify the entire process of ear recognition, including detection, preprocessing, unimodal recognition (covering feature extraction and the classification or matching decision), and multimodal recognition based on inter-level and intra-level fusion. Unimodal and multimodal recognition are covered comprehensively. In addition, inter-level and intra-level fusion are divided into different fusion schemes. At the same time, we compare recognition results on the same datasets and analyze the difficulty of some datasets. In the end, the challenges and outlook of ear recognition are also discussed, in the hope of giving readers some guidance on future directions and on the problems that should be overcome.
Article
Full-text available
Context: The topic is among the modern topics of interest for researchers seeking logical solutions to the problems of detection and recognition for ear identification. Therefore, we look for a solution to the problems of occlusion, detection and recognition of the person, creating an integrated system based on the latest research, and aim to find new results in terms of accuracy and time that are comprehensive for everything. Objective: To survey researchers' efforts in response to the new and disruptive technology of ear identification systems, mapping the research landscape from the literature into a coherent taxonomy. Method: We use a systematic review as the basis for our work. The systematic review builds on 249 peer-reviewed studies, selected through a multi-stage process from 1960 studies published between 2005 and 2017. Results: We develop a taxonomy that classifies ear identification systems. The results of these articles are divided into three main categories, namely review and survey articles, studies conducted on ear biometrics, and the development of ear biometric applications. Conclusion: The paper is, to our knowledge, the largest existing study on the topic of ear identification. This typically reflects the types of available systems. Researchers have expressed their concerns in the literature and suggested many recommendations to resolve the existing and anticipated challenges, the list of which opens many opportunities for research in this field.
Chapter
This paper presents a deep learning approach for ear localization and recognition. The comparable complexity between human outer ear and face in terms of its uniqueness and permanence has increased interest in the use of ear as a biometric. But similar to face recognition, it poses challenges such as illumination, contrast, rotation, scale, and pose variation. Most of the techniques used for ear biometric authentication are based on traditional image processing techniques or handcrafted ensemble features. Owing to extensive work in the field of computer vision using convolutional neural networks (CNNs) and histogram of oriented gradients (HOG), the feasibility of deep neural networks (DNNs) in the field of ear biometrics has been explored in this research paper. A framework for ear localization and recognition is proposed that aims to reduce the pipeline for a biometric recognition system. The proposed framework uses HOG with support vector machines (SVMs) for ear localization and CNN for ear recognition. CNNs combine feature extraction and ear recognition tasks into one network with an aim to resolve issues such as variations in illumination, contrast, rotation, scale, and pose. The feasibility of the proposed technique has been evaluated on USTB III database. This work demonstrates 97.9% average recognition accuracy using CNNs without any image preprocessing, which shows that the proposed approach is promising in the field of biometric recognition.
Article
This paper proposes a 2D ear recognition approach based on the fusion of the ear and the tragus using a score-level fusion strategy. An attempt is made to overcome the effects of partial occlusion, pose variation and weak illumination, since the accuracy of ear recognition may be reduced if one or more of these challenges is present. In this study, the effect of each of the aforementioned challenges is estimated separately, and many ear samples affected by two different challenges concurrently are also considered. The tragus is used as a biometric trait because it is often free from occlusion; it also provides discriminative features even under different poses and illuminations. The features are extracted using local binary patterns, and the evaluation has been done on three datasets of the USTB database. It has been observed that the fusion of ear and tragus can improve recognition performance compared to the unimodal systems. Experimental results show that the proposed method enhances recognition rates by fusing the non-occluded parts with the tragus in cases of partial occlusion, pose variation and weak illumination. It is observed that the proposed method performs better than feature-level fusion methods and most state-of-the-art ear recognition systems.
Article
The capabilities of biometric systems have recently made extraordinary leaps through the emergence of deep learning. However, due to the lack of sufficient training data, applications of deep neural networks in the ear recognition field have run into a bottleneck. Moreover, the effect of fine-tuning from pre-trained models is far less than expected because of the diversity among different tasks. The authors therefore propose a large-scale ear database and explore a robust convolutional neural network (CNN) architecture for ear feature representation. The images in this USTB-Helloear database were taken under uncontrolled conditions with illumination and pose variation and different levels of ear occlusion. They then fine-tuned and modified several deep models on the proposed database through ear verification experiments. First, the last pooling layers were replaced by spatial pyramid pooling layers to fit arbitrary input sizes and obtain multi-level features. In the training phase, the CNNs were trained under the joint supervision of the softmax loss and the centre loss to obtain more compact and discriminative features for identifying unseen ears. Finally, three CNNs operating at different scales of the ear images were assembled as a multi-scale ear representation for ear verification. The experimental results demonstrate the effectiveness of the proposed modified CNN deep model.
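The spatial-pyramid-pooling replacement mentioned above can be sketched as follows, assuming a PyTorch backbone; the pyramid levels (1, 2, 4) are illustrative, and the softmax-plus-centre-loss training is only referenced in the comments.

import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    """Reduce a feature map of arbitrary spatial size to a fixed-length vector."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(k) for k in levels])

    def forward(self, x):                        # x: (N, C, H, W), H and W arbitrary
        pooled = [p(x).flatten(start_dim=1) for p in self.pools]
        return torch.cat(pooled, dim=1)          # (N, C * sum(k * k for k in levels))

# Usage sketch: replace the last pooling layer of a backbone with
# SpatialPyramidPooling(), then feed the fixed-length output to fully
# connected layers trained with softmax plus centre loss.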
Article
Full-text available
An efficient scheme for human ear recognition is presented. This scheme comprises three main phases. First, the ear image is decomposed into a pyramid of progressively downgraded images, which allows the local patterns of the ear to be captured. Second, histograms of local features are extracted from each image in the pyramid and then concatenated to shape one single descriptor of the image. Third, the procedure is finalized by using decision making based on sparse coding. Experiments conducted on two datasets, composed of 125 and 221 subjects, respectively, have demonstrated the efficiency of the proposed strategy as compared to various existing methods. For instance, scores of 96.27% and 96.93% have been obtained for the datasets, respectively.
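A minimal sketch of the pyramid-plus-local-histogram descriptor: a Gaussian pyramid is built and a local-binary-pattern histogram from each level is concatenated into one vector. The use of LBP, the number of levels and the histogram parameters are assumptions made for illustration; the sparse-coding decision stage is not shown.

import numpy as np
from skimage.transform import pyramid_gaussian
from skimage.feature import local_binary_pattern
from skimage.util import img_as_ubyte

def pyramid_lbp_descriptor(gray, max_layer=3, P=8, R=1):
    # gray: uint8 grayscale ear image
    descriptor = []
    for level in pyramid_gaussian(gray, max_layer=max_layer):
        lbp = local_binary_pattern(img_as_ubyte(level), P, R, method='uniform')
        # 'uniform' LBP yields codes 0 .. P+1, hence P+2 histogram bins
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        descriptor.append(hist)
    return np.concatenate(descriptor)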
Chapter
The capabilities of biometric systems, such as face or fingerprint recognition systems, have recently made extraordinary leaps through the emergence of deep learning. However, due to the lack of sufficient training data, applications of deep neural networks in the ear recognition field have run into a bottleneck. The motivation of this paper is therefore to present a new large database that contains more than 610,000 profile images from 1570 subjects. The main distinguishing feature of the images in this USTB-Helloear database is that they were taken under uncontrolled conditions with illumination and pose variation. In addition, individuals were not asked to take any particular care to avoid ear occlusion, and 30% of the subjects have additional control groups with different levels of ear occlusion. The ear images can be used to train a deep learning model for ear detection and recognition; moreover, the database, along with pair-matching tests, provides a benchmark for evaluating the performance of ear recognition and verification systems.
Article
Full-text available
Face recognition has attracted numerous research interests as a promising biometric with many distinct advantages. However, there are inevitable gaps between face recognition under laboratory conditions and ubiquitous face recognition applications in the real world, mainly caused by varying illumination conditions, random occlusion, lack of sample images, etc. To combat the influence of these factors, a novel dual-feature sparse representation classification algorithm is proposed. It comprises illumination-robust feature-based dictionary learning and fused sparse representation with dual features. First, an enhanced centre-symmetric local binary pattern (ECSLBP), derived from centre-symmetric encoding of the fused component images, is presented for dictionary construction. Then, sparse representation with dual features, including both ECSLBP and CSLBP, is conducted. The final recognition is derived from the fusion of both classification results according to a novel fusion scheme. Extensive experimental results on both the Extended Yale B database and the AR database show that the proposed algorithm exhibits distinguished discriminative ability and state-of-the-art recognition rates compared with other existing algorithms, especially for single-sample face recognition under random partial occlusion.
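For context, a minimal sketch of the plain centre-symmetric LBP (CS-LBP) code on which the paper's enhanced variant builds: opposite pixels of the 3×3 neighbourhood are compared, giving a 4-bit code per pixel. The threshold value is an assumption; this is not the authors' ECSLBP.

import numpy as np

def cs_lbp(gray, T=0.01):
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                      # centre pixels (only needed for the output shape)
    # the four centre-symmetric pairs of the 8-neighbourhood (row, col offsets in the 3x3 grid)
    pairs = [((0, 1), (2, 1)),             # north vs south
             ((0, 2), (2, 0)),             # north-east vs south-west
             ((1, 2), (1, 0)),             # east vs west
             ((2, 2), (0, 0))]             # south-east vs north-west
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = g[r1:r1 + c.shape[0], c1:c1 + c.shape[1]]
        b = g[r2:r2 + c.shape[0], c2:c2 + c.shape[1]]
        code |= ((a - b) > T).astype(np.uint8) << bit
    return code                            # values in 0..15; usually histogrammed per block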
Article
The relatively stable structure of the human ear makes it suitable for identification. The significance of ear recognition in human authentication has become prominent in recent years. A number of ear recognition systems and methods have achieved good performance under limited conditions in the laboratory. In real-world applications, however, such as passport identification and law enforcement, where usually only one sample per person (OSPP) is registered in the gallery, most of the existing ear recognition methods are paralyzed by partial data (e.g., pose variations and occlusion). To address such problems, we propose a weighted multikeypoint descriptor sparse representation-based classification method to use local features of ear images. By adding adaptive weights to all the keypoints on a query image, the intraclass variations are reduced. Besides, the interclass variations of the gallery samples are enlarged by purifying the multikeypoint dictionary. Experiments are carried out on two benchmark databases, i.e., the Indian Institute of Technology Delhi ear database and the University of Science and Technology Beijing ear image database III, to demonstrate the feasibility and effectiveness of the proposed method in dealing with partial data problems in ear recognition under the premise of OSPP in the gallery. The proposed method has achieved state-of-the-art recognition performance especially when the ear images are affected by pose variations and random occlusion.
Article
Human ear recognition has been promoted as a profitable biometric over the past few years. With respect to other modalities, such as the face and iris, that have undergone significant investigation in the literature, the ear pattern is still relatively uncommon. We put forth a sparse coding-induced decision-making scheme for ear recognition. It jointly involves the reconstruction residuals and the respective reconstruction coefficients pertaining to the input features (co-occurrence of adjacent local binary patterns) for a further fusion. We show in particular that combining both components (i.e., the residuals as well as the coefficients) yields better outcomes than using either of them alone. The proposed method has been evaluated on two benchmark datasets, namely IITD1 (125 subjects) and IITD2 (221 subjects). The recognition rates of the suggested scheme amount to 99.5% and 98.95% for the two datasets, respectively, which suggests that our method stands out decently against reference state-of-the-art methodologies. Furthermore, experiments show that the presented scheme manifests a promising robustness under large-scale occlusion scenarios.
Article
Biometric authentication has been corroborated to be an effective method for recognizing a person's identity with high confidence. In this field, the use of three-dimensional (3D) ear shape is a recent trend. As a biometric identifier, the ear has several inherent merits. However, although a great deal of effort has been devoted to it, there is still large room for improvement in developing a highly effective and efficient 3D ear identification approach. In this paper, we attempt to fill this gap to some extent by proposing a novel 3D ear classification scheme that makes use of the label-consistent K-SVD (LC-KSVD) framework. As an effective supervised dictionary learning algorithm, LC-KSVD learns a single compact discriminative dictionary for sparse coding and a multi-class linear classifier simultaneously. To use the LC-KSVD framework, one key issue is how to extract feature vectors from 3D ear scans. To this end, we propose a block-wise statistics-based feature extraction scheme. Specifically, we divide a 3D ear region of interest into uniform blocks and extract a histogram of surface types from each block; histograms from all blocks are then concatenated to form the desired feature vector. Feature vectors extracted in this way are highly discriminative and are robust to slight misalignment between samples. Experiments demonstrate that our approach can achieve better recognition accuracy than other state-of-the-art methods. More importantly, its computational complexity is extremely low, making it well suited to large-scale identification applications. MATLAB source code is publicly available online at http://sse.tongji.edu.cn/linzhang/LCKSVDEar/LCKSVDEar.htm.
Article
A two-stage ear recognition framework is presented in which two local descriptors and a sparse representation algorithm are combined. In the first stage, the algorithm deduces a subset of the training samples closest to the test ear sample. The selection is based on the K-nearest-neighbours classifier in the pattern-of-oriented-edge-magnitude feature space. In the second stage, co-occurrence of adjacent local binary pattern features is extracted from the preselected subset and combined to form a dictionary. Afterwards, a sparse representation classifier is employed on the developed dictionary to infer the element closest to the test sample. By splitting the ear image into a number of segments and applying the described recognition routine to each of them, the algorithm concludes by assigning a final class label based on majority voting over the individual labels produced by each segment. Experimental results demonstrate the effectiveness as well as the robustness of the proposed scheme over leading state-of-the-art methods. In particular, when the ear image is occluded, the proposed algorithm exhibits great robustness and reaches the recognition performance reported in the state of the art.
Article
Full-text available
We describe a novel approach for 3-D ear biometrics using video. A series of frames is extracted from a video clip and the region of interest in each frame is independently reconstructed in 3-D using shape from shading. The resulting 3-D models are then registered using the iterative closest point algorithm. We iteratively consider each model in the series as a reference model and calculate the similarity between the reference model and every model in the series using a similarity cost function. Cross validation is performed to assess the relative fidelity of each 3-D model. The model that demonstrates the greatest overall similarity is determined to be the most stable 3-D model and is subsequently enrolled in the database. Experiments are conducted using a gallery set of 402 video clips and a probe of 60 video clips. The results (95.0% rank-1 recognition rate and 3.3% equal error rate) indicate that the proposed approach can produce recognition rates comparable to systems that use 3-D range data. To the best of our knowledge, we are the first to develop a 3-D ear biometric system that obtains a 3-D ear structure from a video sequence.
Conference Paper
Full-text available
A class of biometrics based upon ear features is introduced for use in the development of passive identification systems. The viability of the proposed biometric is shown both theoretically, in terms of the uniqueness and measurability over time of the ear, and in practice through the implementation of a computer vision-based system. Each subject's ear is modeled as an adjacency graph built from the Voronoi diagram of its curve segments. We introduce a novel graph-matching-based algorithm for authentication which takes into account the erroneous curve segments that can occur due to changes (e.g., lighting, shadowing, and occlusion) in the ear image. This class of biometrics is ideal for passive identification because the features are robust and can be reliably extracted from a distance.
Article
Full-text available
In this work, we propose a local approach for 2D ear authentication based on an ensemble of matchers trained on different color spaces. This is the first work that proposes to exploit the powerful properties of color analysis for improving the performance of an ear matcher. The method described is based on the selection of color spaces from which a set of Gabor features are extracted. The selection is performed using sequential forward floating selection, where the fitness function is related to the optimization of the ear recognition performance. Finally, the matching step is performed by combining, through the sum rule, several 1-nearest neighbor classifiers constructed on different color components. The effectiveness of the proposed method is demonstrated using the Notre-Dame EAR data set. Particularly interesting are the results obtained by the new approach in terms of rank-1 (∼84%), rank-5 (∼93%) and area under the ROC curve (∼98.5%), which are better than those obtained by other state-of-the-art 2D ear matchers.
Article
Full-text available
In this work we propose a local approach to 2D ear authentication. A multi-matcher system is proposed in which each matcher is trained using features extracted from a single sub-window of the whole 2D image. The features are extracted by convolving each sub-window with a bank of Gabor filters, and their dimensionality is then reduced by Laplacian Eigenmaps. The best matchers, corresponding to the most discriminative sub-windows, are selected by running Sequential Forward Floating Selection (SFFS). Our experiments, carried out on a database of 114 people, show that by combining only a few (∼ten) sub-windows in the fusion step it is possible to achieve a very low Equal Error Rate.
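A minimal sketch of per-sub-window Gabor feature extraction in the spirit of the approach above, using scikit-image's gabor filter; the bank size (four frequencies by four orientations), the mean/variance pooling of the responses and the omission of the Laplacian Eigenmaps step are assumptions made for illustration.

import numpy as np
from skimage.filters import gabor

def gabor_subwindow_features(subwindow, frequencies=(0.1, 0.2, 0.3, 0.4), n_orient=4):
    # subwindow: 2D grayscale patch cut from the ear image
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(subwindow, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)           # magnitude of the complex response
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)                        # one feature vector per sub-window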
Article
Full-text available
We describe how gait and ear biometrics could be deployed for forensic identification. Biometrics has advanced considerably in recent years, largely owing to increases in computational power, and this has been accompanied by developments in, and the proliferation of, surveillance technology. To prevent identification, subjects use evasion, disguise or concealment. The human gait is a candidate for identification since other cues can be completely concealed while the gait may still be perceivable. The advantage of using the human ear is its permanence with increasing age. As such, not only are biometrics ripe for forensic deployment, but ears and gait also offer distinct advantages over other biometric modalities.
Article
Full-text available
Most existing work in 3D object recognition in computer vision has been on recognizing dissimilar objects using a small database. For rapid indexing and recognition of highly similar objects, this paper proposes a novel method which combines the feature embedding for the fast retrieval of surface descriptors, novel similarity measures for correspondence and a support vector machine (SVM)-based learning technique for ranking the hypotheses. The local surface patch (LSP) representation is used to find the correspondences between a model-test pair. Due to its high dimensionality, an embedding algorithm is used that maps the feature vectors to a low-dimensional space where distance relationships are preserved. By searching the nearest neighbors in low dimensions, the similarity between a model-test pair is computed using the novel features. The similarities for all model-test pairs are ranked using the learning algorithm to generate a short list of candidate models for verification. The verification is performed by aligning a model with the test object. The experimental results, on the UND dataset (302 subjects with 604 images) and the UCR dataset (155 subjects with 902 images) that contain 3D human ears, are presented and compared with the geometric hashing technique to demonstrate the efficiency and effectiveness of the proposed approach.
Article
Full-text available
Human ear is a new class of relatively stable biometrics that has drawn researchers' attention recently. In this paper, we propose a complete human recognition system using 3D ear biometrics. The system consists of 3D ear detection, 3D ear identification, and 3D ear verification. For ear detection, we propose a new approach which uses a single reference 3D ear shape model and locates the ear helix and the antihelix parts in registered 2D color and 3D range images. For ear identification and verification using range images, two new representations are proposed. These include the ear helix/antihelix representation obtained from the detection algorithm and the local surface patch (LSP) representation computed at feature points. A local surface descriptor is characterized by a centroid, a local surface type, and a 2D histogram. The 2D histogram shows the frequency of occurrence of shape index values versus the angles between the normal of reference feature point and that of its neighbors. Both shape representations are used to estimate the initial rigid transformation between a gallery-probe pair. This transformation is applied to selected locations of ears in the gallery set and a modified Iterative Closest Point (ICP) algorithm is used to iteratively refine the transformation to bring the gallery ear and probe ear into the best alignment in the sense of the least root mean square error. The experimental results on the UCR data set of 155 subjects with 902 images under pose variations and the University of Notre Dame data set of 302 subjects with time-lapse gallery-probe pairs are presented to compare and demonstrate the effectiveness of the proposed algorithms and the system.
Conference Paper
Full-text available
Two 3D ear recognition systems using structure from motion (SFM) and shape from shading (SFS) techniques, respectively, are explored. Segmentation of the ear region is performed using interpolation of ridges and ravines identified in each frame in a video sequence. For the SFM system, salient features are tracked across the video sequence and are reconstructed in 3D using a factorization method. Reconstructed points located within the valid ear region are stored as the ear model. The dataset used consists of video sequences for 48 subjects. Each test model is optimally aligned to the database models using a combination of geometric transformations which result in a minimal partial Hausdorff distance. For the SFS system, the ear structure is recovered by using reflectance and illumination properties of the scene. Shape matching is performed via iterative closest point. Based on our results, we conclude that both structure from motion and shape from shading are viable approaches for 3D ear recognition from video sequences.
Article
Recognizing human faces is one of the most important areas of research in biometrics; however, drastic pose changes are a major challenge for its practical application. This paper proposes recognizing humans by fused features of the face and ear extracted with independent component analysis. Unlike conventional ICA (ICA1), which searches for independent basis images, we introduce a method that searches for a set of independent feature coefficients to represent an image (ICA2). Experimental results show that multi-biometric fusion can effectively overcome the effect of pose variation. We also find that ICA2 performs better only when the database size is suitable and the pose variation is not too severe. Furthermore, we propose a weighted serial feature fusion strategy with a trained rule which improves the result further.
Article
In many cases, biometric systems for human identification are motivated by real-life criminal and forensic applications. Some methods, such as fingerprinting and face recognition, have proved to be very efficient in computer vision-based human recognition systems. In this paper we focus on novel methods of human identification motivated by forensic and criminal practice. Our goal is to develop computer vision systems that identify humans on the basis of their lip, palm and ear images.
Article
Extraction and representation of features are critical to improving the recognition rate of ear image recognition. The scale-invariant feature transform (SIFT) is a local point feature extraction method. It finds feature vectors across scale space that are invariant to scale changes and rotations and robust to illumination variations and affine transformations. SIFT is used to extract structural feature points of ear images and obtain stable feature descriptors. To overcome a weakness of local descriptors, namely that an image may contain multiple similar regions, an auricle geometric feature is fused with them. Ear recognition based on these fused vectors is carried out using Euclidean distance as the similarity measure. Experimental results show that the proposed method can effectively extract ear feature points and obtain a high recognition rate using few feature points. It is robust to rigid transformations of the ear image.
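A minimal sketch of SIFT-based matching with Euclidean descriptor distance, in the spirit of the method above (the fused auricle geometric feature is omitted); OpenCV 4.4+ is assumed for SIFT_create, and the ratio test is a standard addition rather than something stated in the abstract.

import cv2

def sift_match_score(img_probe, img_gallery, ratio=0.75):
    # img_probe / img_gallery: grayscale uint8 ear images
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_probe, None)
    kp2, des2 = sift.detectAndCompute(img_gallery, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)               # Euclidean distance between descriptors
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive correspondences
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good)                                   # more matches = more similar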
Article
As a new biometric authentication technology, ear recognition still has many unresolved problems, one of which is occlusion. This paper deals with ear recognition from partially occluded ear images. First, the whole 2D image is divided into sub-windows. Then, Neighborhood Preserving Embedding is used for feature extraction on each sub-window, and the most discriminative sub-windows are selected according to their recognition rates. Third, a multi-matcher fusion approach is used for recognition with partially occluded images. Experiments on the USTB ear image database have illustrated that only a few sub-windows are needed to represent the most meaningful region of the ear, and the multi-matcher model achieves a higher recognition rate than using the whole image for recognition.
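A minimal sketch of the sub-window partition and multi-matcher vote described above, not the authors' implementation: the grid size, the per-window matcher interface and the use of simple majority voting (in place of the paper's NPE-based sub-classifiers and fusion rule) are placeholders for illustration.

import numpy as np
from collections import Counter

def split_subwindows(image, rows=6, cols=4):
    # partition the (cropped, aligned) ear image into a rows x cols grid of sub-windows
    h, w = image.shape[0] // rows, image.shape[1] // cols
    return [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]

def majority_vote(image, selected_idx, matchers):
    """matchers[i] maps sub-window i to a predicted subject label;
    selected_idx holds the indices of the most discriminative sub-windows."""
    windows = split_subwindows(image)
    votes = [matchers[i](windows[i]) for i in selected_idx]
    return Counter(votes).most_common(1)[0][0]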
Article
The mechanisms of the reactions of W and W+ with N2O were investigated at the CCSD(T)/[SDD+6-311G(d)]//B3LYP/[SDD+6-31G(d)] level of theory. It was shown that the reaction of W(7S) + N2O(1Σ+) is a multi-state process involving several low-lying electronic states of numerous intermediates and transition states, and leads to oxidation, WO(3Σ) + N2(1Σg+), with a negligible barrier and/or nitration, WN(4Σ) + NO(2Π), with a barrier of 6.7 kcal/mol relative to the reactants. The reaction of W+ with N2O resembles that of its neutral analogue: it proceeds via insertion and direct abstraction pathways and leads to oxidation and nitration at the W centre.
Article
As a biometric, ears have a major advantage in that they appear to maintain their shape with increasing age. Current approaches have exploited both 2D and 3D images of the ear for human identification. Contending that the ear is mainly a planar shape, we use 2D images, which are also consistent with deployment in surveillance and other planar-image scenarios. Capitalizing on explicit structures, we propose a new parts-based model which has an advantage in handling noise and occlusion. Our model is learned via a stochastic clustering algorithm and a training set of ear images, in which candidates for the model parts are detected using the Scale Invariant Feature Transform (SIFT). We review different accounts of ear formation and some congenital ear anomalies that describe how the ear's complex structure is apportioned into components, and illustrate that our parts-based approach is in accordance with this component-wise structure. In recognition, the ears are automatically enrolled and recognized from the parts selected via the model. The performance is evaluated on test sets selected from the XM2VTS data. The model achieves promising results in recognizing unoccluded ears, and for occluded samples its performance is evaluated against PCA and a robust PCA. These results, both in modelling and recognition, indicate that the new model-based method is a promising approach to ear biometrics.
Chapter
3D ear reconstruction based on a multi-view method is explored in this paper. Our approach does not depend on special facilities; it applies multi-view epipolar geometry and motion analysis principles to reconstruct a 3D ear model. A method for selecting and matching ear feature points based on ear contour division in a user-interactive way is proposed. Detailed experimental results and comparisons with existing reconstruction methods are provided. Potential approaches to 3D ear feature extraction and recognition are discussed in the final part.
Conference Paper
An improved non-negative matrix factorization with sparseness constraints (INMFSC) is proposed by imposing an additional constraint on the objective function of NMFSC, which controls the sparseness of both the basis vectors and the coefficient matrix simultaneously. The update rules for solving the constrained objective function are presented. Research into ear recognition and its application is a new subject in the field of biometric authentication. In practical applications the ear may be partially occluded by hair, so the proposed INMFSC is applied to ear recognition with both normal and partially occluded images. Experimental results show that, compared with traditional NMFSC, the proposed method not only obtains a higher recognition rate but also improves the sparseness and the orthogonality of the coefficient matrix.
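As a small illustration, the sketch below combines a plain NMF decomposition from scikit-learn with Hoyer's sparseness measure, which NMF-with-sparseness-constraints uses to quantify sparseness; the additional constraint on both factors described above is not reproduced here.

import numpy as np
from sklearn.decomposition import NMF

def hoyer_sparseness(x):
    """1 for a vector with a single non-zero entry, 0 for a uniform vector."""
    x = np.abs(np.ravel(x))
    n = x.size
    return (np.sqrt(n) - x.sum() / (np.linalg.norm(x) + 1e-12)) / (np.sqrt(n) - 1)

def decompose(V, rank=40):
    # V: non-negative (pixels x images) matrix of vectorised ear images.
    # W holds basis images in its columns, H the coefficients of each image.
    model = NMF(n_components=rank, init='nndsvd', max_iter=500)
    W = model.fit_transform(V)
    H = model.components_
    mean_coef_sparseness = np.mean([hoyer_sparseness(h) for h in H.T])
    return W, H, mean_coef_sparseness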
Conference Paper
In this paper, we propose an ear detection approach for complex backgrounds which has two stages: off-line cascaded classifier training and on-line ear detection. In the off-line training stage, considering the unique contour and the concave and convex structure of the ear, we apply extended Haar-like features to construct the space of weak classifiers using nearest-neighbour norms, and then use the gentle AdaBoost algorithm to train the strong classifiers that form a cascaded multi-layer ear detector. In the on-line detection stage, we apply two methods to speed up the detection procedure. The first is to adjust the thresholds of the strong classifiers so that ear-like sub-windows are retained for further processing using only the first two layers of classifiers. The second is to keep the size of the original image while scaling the detection sub-windows to locate the ear. Ear detection experiments on the USTB ear database, the CAS-PEAL face database and the CMU PIE database show that the proposed method is efficient and robust.
Conference Paper
Although ear recognition has been widely researched, some problems remain to be resolved in depth, such as multi-pose ear recognition, which has rarely been addressed. In this paper, unlike previous ear recognition methods, a nonlinear manifold learning algorithm, locally linear embedding (LLE), is introduced, and an improved locally linear embedding algorithm (IDLLE) is proposed to address the disadvantages of the standard LLE algorithm. Compared with PCA and KPCA, experimental results demonstrate that applying LLE to multi-pose ear recognition obtains better recognition results, and that IDLLE further improves recognition performance for multi-pose ear recognition, demonstrating the validity of the improved algorithm.
Article
Ear recognition, as a biometric, has several advantages. In particular, ears can be measured remotely and are also relatively static in size and structure for each individual. Unfortunately, at present, good recognition rates require controlled conditions. For commercial use, these systems need to be much more robust. In particular, ears have to be recognized from different angles (poses), under different lighting conditions, and with different cameras. It must also be possible to distinguish ears from background clutter and identify them when partly occluded by hair, hats, or other objects. The purpose of this paper is to suggest how progress toward such robustness might be achieved through a technique that improves ear registration. The approach focuses on 2-D images, treating the ear as a planar surface that is registered to a gallery using a homography transform calculated from scale-invariant feature-transform feature matches. The feature matches reduce the gallery size and enable a precise ranking using a simple 2-D distance algorithm. Analysis on a range of data sets demonstrates the technique to be robust to background clutter, viewing angles up to ±13°, and up to 18% occlusion. In addition, recognition remains accurate with masked ear images as small as 20 × 35 pixels.
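A minimal sketch of homography-based registration from matched keypoints, assuming OpenCV and (N, 2) arrays of matched probe/gallery coordinates already obtained (e.g. from SIFT matching); the RANSAC threshold is an illustrative choice.

import cv2
import numpy as np

def register_to_gallery(probe_img, probe_pts, gallery_pts, gallery_shape):
    # estimate a planar homography with RANSAC from the matched coordinates
    H, inlier_mask = cv2.findHomography(
        np.asarray(probe_pts, dtype=np.float32),
        np.asarray(gallery_pts, dtype=np.float32),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if H is None:                                   # too few / degenerate matches
        return None, 0
    # warp the probe into the gallery frame for distance-based ranking
    registered = cv2.warpPerspective(
        probe_img, H, (gallery_shape[1], gallery_shape[0]))
    return registered, int(inlier_mask.sum())       # inlier count can help rank the gallery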
Article
Force field feature extraction for ear biometrics is discussed. The force field transformation treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface resembling a small mountain with a number of peaks joined by ridges. The technique is validated by performing recognition on an ear database and by comparing the results with the more established technique of principal component analysis.
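A minimal sketch of the potential energy field underlying the force field transform, under the assumption that each pixel contributes a potential that falls off as the inverse of its distance from the point being evaluated; the kernel radius is an arbitrary choice and the full feature-extraction pipeline (peaks and ridges of the surface) is not shown.

import numpy as np
from scipy.signal import fftconvolve

def potential_energy_field(gray, radius=30):
    # build an inverse-distance kernel with no self-contribution at the centre
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.hypot(ys, xs)
    kernel = np.where(dist > 0, 1.0 / np.maximum(dist, 1e-9), 0.0)
    # each pixel acts as a "particle" whose intensity scales its contribution
    return fftconvolve(gray.astype(float), kernel, mode='same')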
Conference Paper
Application and research of ear recognition technology is a new subject in the field of biometric recognition. Earlier research showed that the human ear is a representative human biometric with uniqueness and stability. The feasibility and characteristics of ear recognition are discussed and recent advances in the 2D and 3D domains are presented. Furthermore, future research topics are proposed, such as ear database generation, ear detection, the ear occlusion problem and multimodal biometrics with the face.
Conference Paper
We describe a novel approach for 3D ear biometrics using video. A series of frames are extracted from a video clip and the region-of-interest (ROI) in each frame is independently reconstructed in 3D using Shape from Shading (SFS). The resulting 3D models are then registered using the Iterative Closest Point (ICP) algorithm. We iteratively consider each model in the series as a reference and calculate the similarity between the reference model and every model in the series using a similarity cost function. Cross validation is performed to assess the relative fidelity of each 3D model. The model that demonstrates the greatest overall similarity is determined to be the most stable 3D model and is subsequently enrolled in the database. Experiments are conducted using a gallery set of 402 video clips and a probe of 60 video clips. The results (95.0% rank-1 recognition rate) indicate that the proposed approach can produce recognition rates comparable to systems that use 3D range data. To the best of our knowledge, we are the first to develop a 3D ear biometric system that obtains 3D ear structure from a video sequence.
Conference Paper
In this paper we address, for the first time, the problem of user identification using ear biometrics in the context of sparse representation. During the training session the compressed ear images are transformed into vectors to develop a dictionary matrix A [1]. The downsampled probe vector y is used to form a linear, underdetermined system of equations y = Ax, with x unknown. The ill-posed system is regularized by exploiting the sparse nature of x, and the inverse problem is solved through l1-norm minimization. Ideally, the nonzero entries in the recovered vector x correspond to the class of the probe y. The developed system does not assume any preprocessing or normalization of the ear region. We conducted extensive experiments on the UND [2,3] and FEUD [4] databases with session variability, incorporating different head rotations and lighting conditions. The proposed system is found to be robust under varying lighting and head rotations, yielding a high recognition rate of the order of 98%. Moreover, in the context of sparse representation, a tuning parameter of the system is identified and designated as an Operating Point (OP). The significance of the OP is highlighted by mathematical arguments and experimental verification.
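A minimal sketch of the sparse-representation classification step described above, with scikit-learn's Lasso standing in for the paper's exact l1-minimization routine; the dictionary layout (column-normalised training vectors) and the regularization strength are assumptions.

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """A: (d, n) dictionary of column-normalised training vectors,
    labels: length-n class labels, y: (d,) downsampled probe vector."""
    # recover a sparse code x such that y is approximately A @ x
    x = Lasso(alpha=alpha, max_iter=10000, fit_intercept=False).fit(A, y).coef_
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        x_c = np.where(mask, x, 0.0)                 # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ x_c)   # class-wise reconstruction residual
    return min(residuals, key=residuals.get)         # class with the smallest residual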
Conference Paper
In many cases, biometric systems for human identification are motivated by real-life criminal and forensic applications. Some methods, such as fingerprinting and face recognition, have proved to be very efficient in computer vision-based human recognition systems. In this paper we focus on novel methods of human identification motivated by forensic and criminal practice. Our goal is to develop computer vision systems that identify humans on the basis of their lip, palm and ear images.
Article
It is more than 10 years since the first tentative experiments in ear biometrics were conducted, and it has now reached the "adolescence" of its development towards a mature biometric. Here we present a timely retrospective of the ensuing research since those early days. Whilst its detailed structure may not be as complex as that of the iris, we show that the ear has unique security advantages over other biometrics. It is most unusual, even unique, in that it supports not only visual and forensic recognition, but also acoustic recognition at the same time. This, together with its deep three-dimensional structure and its robust resistance to change with age, will make it very difficult to counterfeit, thus ensuring that the ear will occupy a special place in situations requiring a high degree of protection.
Article
The Forensic Ear Identification (FearID) research project was started in order to study the strength of evidence of earprints found at crime scenes. For this purpose, a sample of earprints from 1229 donors across three countries was collected, with three left and three right earprints gathered from each donor. Operators marked the contours of the earprints to facilitate segmentation of the images, while anthropological specialists marked anatomically specific locations. On this basis, methods for automated classification were developed and used to train a system that classifies pairs of prints as 'matching' or 'non-matching'. Comparing lab-quality prints, the system has an equal error rate of 4%. Starting from a reference database containing two prints per ear, hitlist behaviour is such that in 90% of all query searches the best hit is in the top 0.1% of the list. The results become less favourable (equal error rate of 9%) for print/mark comparisons.
Article
Previous works have shown that the ear is a promising candidate for biometric identification. However, in prior work, the preprocessing of ear images has had manual steps and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario on a database of 415 subjects and 1,386 total probes.
Conference Paper
Recently there has been a lot of interest in geometrically motivated approaches to data analysis in high-dimensional spaces. We consider the case where data is drawn by sampling a probability distribution that has support on or near a submanifold of Euclidean space. In this paper, we propose a novel subspace learning algorithm called neighborhood preserving embedding (NPE). Unlike principal component analysis (PCA), which aims at preserving the global Euclidean structure, NPE aims at preserving the local neighborhood structure on the data manifold; therefore, NPE is less sensitive to outliers than PCA. Also, compared with recently proposed manifold learning algorithms such as Isomap and locally linear embedding, NPE is defined everywhere, rather than only on the training data points. Furthermore, NPE may be conducted in the original space or in the reproducing kernel Hilbert space into which the data points are mapped, giving rise to kernel NPE. Several experiments on face databases demonstrate the effectiveness of our algorithm.
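A plain NumPy/SciPy sketch of NPE as summarised above, for row-stacked data X: LLE-style reconstruction weights W are computed from the k nearest neighbours of each sample, and the linear projection comes from the generalised eigenproblem X^T M X a = lambda X^T X a with M = (I - W)^T (I - W), keeping the eigenvectors of the smallest eigenvalues. The neighbourhood size, regularization term and output dimensionality are illustrative choices.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import NearestNeighbors

def npe(X, n_components=20, k=8, reg=1e-3):
    """X: (n_samples, n_features). Returns an (n_features, n_components) projection."""
    n = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]                           # drop the point itself
        Z = X[neigh] - X[i]
        C = Z @ Z.T                                  # local Gram matrix
        C = C + reg * (np.trace(C) + 1e-12) * np.eye(k)  # standard LLE regularization
        w = np.linalg.solve(C, np.ones(k))
        W[i, neigh] = w / w.sum()                    # reconstruction weights, rows sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = X.T @ M @ X
    B = X.T @ X + reg * np.eye(X.shape[1])
    _, vecs = eigh(A, B)                             # eigenvalues in ascending order
    return vecs[:, :n_components]                    # project new samples with X_new @ proj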
Article
Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment.
Ross, A.A., Nandakumar, K., Jain, A.K., 2006. Score Level Fusion. In: Handbook of Multibiometrics. Springer, USA, pp. 91–142.
Fig. 12. Relationship between the number of sub-windows and rank-1 recognition rate on: (a) USTB dataset 3, (b) UND dataset. (Yuan, L., Mu, Z.C., Pattern Recognition Letters 33 (2012) 182–190.)
Yuan, L., Mu, Z.C., Zhang, Y., 2006. Ear recognition with occlusion based on improved non-negative matrix factorization with sparseness constraint. In: Proc. 18th Internat. Conf. on Pattern Recognition, Hong Kong, vol. 4, pp. 501–504.
Iannarelli, A., 1989. Ear Identification. Forensic Identification Series. Paramount Publishing Company, Fremont, California.