Fig. 3. (a) Sample of different poses of one subject from the Sheffield Face Multi-View Database used in our experiment. (b) Sample of different poses of another subject from the same database.
Source publication
This paper develops a new face recognition system that combines R-KDA for selecting optimal discriminant features with a non-linear SVM for recognition. Experiments demonstrate the enhanced efficiency of the proposed system over R-KDA with k-NN as the similarity distance measure.
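As a rough illustration of the pipeline described in this abstract, the sketch below (Python, scikit-learn) approximates the two stages: a kernel discriminant projection followed by a non-linear SVM. Scikit-learn ships no R-KDA implementation, so a Nystroem kernel feature map followed by LDA stands in for the discriminant step; the kernel choice, gamma, C, and component count are illustrative assumptions, not the paper's settings.

```python
# Sketch: kernel discriminant features -> non-linear SVM (scikit-learn).
# R-KDA is not available in scikit-learn, so a Nystroem kernel feature map
# followed by LDA stands in for the discriminant-projection step.
from sklearn.pipeline import make_pipeline
from sklearn.kernel_approximation import Nystroem
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def build_recognizer(n_kernel_features=100, gamma=1e-3):
    return make_pipeline(
        Nystroem(kernel="rbf", gamma=gamma, n_components=n_kernel_features),  # kernel map
        LinearDiscriminantAnalysis(),  # discriminant projection (at most n_classes - 1 dims)
        SVC(kernel="rbf", C=10.0),     # non-linear SVM classifier
    )

# X_train: (n_samples, n_pixels) flattened face images, y_train: subject labels
# model = build_recognizer().fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```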
Context in source publication
Context 1
... files are all in PGM format, approximately 220 x 220 pixels in 256 shades of grey. From these images, 10 different poses of each of 18 subjects are taken for our experiments. Samples of two different persons are given in Fig. 3(a) and (b). Six poses per subject are used for training and the remaining four for testing. ...
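A minimal sketch of the data preparation described in this excerpt (18 subjects, 10 poses each, 6 for training and 4 for testing), assuming one folder of PGM files per subject; the directory layout and file ordering are assumptions made for illustration.

```python
# Sketch of the 6-train / 4-test per-subject split described above.
from pathlib import Path
import numpy as np
from PIL import Image

def load_split(root, n_train=6):
    X_train, y_train, X_test, y_test = [], [], [], []
    for label, subject_dir in enumerate(sorted(Path(root).iterdir())):
        poses = sorted(subject_dir.glob("*.pgm"))[:10]
        for i, path in enumerate(poses):
            img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
            vec = img.ravel() / 255.0          # flatten ~220x220 greyscale image
            if i < n_train:
                X_train.append(vec); y_train.append(label)
            else:
                X_test.append(vec); y_test.append(label)
    return (np.array(X_train), np.array(y_train),
            np.array(X_test), np.array(y_test))
```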
Similar publications
We generate natural language explanations for a fine-grained visual recognition task. Our explanations fulfill two criteria. First, explanations are class discriminative, meaning they mention attributes in an image which are important to identify a class. Second, explanations are image relevant, meaning they reflect the actual content of an image....
Quality differences between organic and conventional fresh tomatoes (unprocessed) and frozen tomatoes (processed) are evaluated by using a capillary rising picture method (capillary dynamolysis). The best pictures showing the differences most sharply between organic and conventional samples were prepared with 0.25-0.75% silver nitrate, 0.25-0.75% i...
This paper demonstrates that a simple modification of the variational autoencoder (VAE) formalism enables the method to identify and classify rotated and distorted digits. In particular, the conventional objective (cost) function employed during the training process of a VAE both quantifies the agreement between the input and output data records an...
Motor disturbances can affect the interaction with dynamic objects, such as catching a ball. A classification of clinical catching trials might give insight into the existence of pathological alterations in the relation of arm and ball movements. Accurate, but also early decisions are required to classify a catching attempt before the catcher's fir...
Cardiovascular disease (CVD) has become one of the most serious diseases threatening human health. Over the past decades, more than 150 million people have died of CVDs. Hence, timely prediction of CVDs is especially important. Currently, deep learning algorithm-based CVD diagnosis methods are extensively employed; however, most such algorithms can on...
Citations
... To improve the face recognition process, features with greater power to differentiate between individuals are most effective. Principal component analysis (PCA), linear discriminant analysis (LDA), and discrete cosine transform (DCT) are the most popular feature extraction methods for face recognition [11]. PCA and LDA use training data to learn a subspace. ...
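For context, a brief sketch of the three feature extractors named in this excerpt: PCA and LDA learn a subspace from training data, while the DCT is a fixed, data-independent transform. The component counts and DCT block size are illustrative choices, not taken from the cited work.

```python
# PCA and LDA learn a projection from training data; DCT needs no training.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pca_features(X_train, X_test, n_components=50):
    pca = PCA(n_components=n_components).fit(X_train)
    return pca.transform(X_train), pca.transform(X_test)

def lda_features(X_train, y_train, X_test):
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)  # at most n_classes - 1 dims
    return lda.transform(X_train), lda.transform(X_test)

def dct_features(img, block=8):
    # keep the low-frequency top-left block of the 2-D DCT as the feature vector
    coeffs = dctn(img, norm="ortho")
    return coeffs[:block, :block].ravel()
```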
Face recognition has advanced considerably and achieved high accuracy in identifying individuals in recent years. In practice, however, distinguishing similar faces, such as those of identical twins, remains a great challenge for face recognition systems because the differences between their facial features are very small. Extracting common face features is therefore not adequate for differentiating identical twins. A solution to this problem is to find the most distinctive regions in the faces of identical twins. In this paper, two procedures are used to find these regions: 1) Machine processing: a Modified SIFT (M-SIFT) algorithm is applied to identical twins’ face images. Each face image is segmented into five regions containing the eyes, eyebrows, nose, mouth, and face curve. The location and number of mismatched keypoints indicate the most distinctive face region for each pair of identical twins. 2) Crowdsourcing: differences between identical twins’ faces are identified from a human viewpoint by enlisting crowd intelligence; several questionnaires were designed and completed by 120 participants. The dataset for this study was collected by the authors and includes 650 images of 115 pairs of identical twins and 120 non-twin individuals. The results of the machine processing and crowdsourcing methods show that the face curve is the most discriminative of the five regions for most identical twins. Several features are proposed and extracted from the M-SIFT keypoints and face landmarks. The experimental results demonstrate lowest equal error rates for identical twin recognition of 7.8%, 8.1%, and 10.1% when using all images, only frontal images, and only images with PAN motions, respectively.
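A hedged sketch of the keypoint-matching idea described in this abstract. The paper's Modified SIFT is not reproduced here; plain OpenCV SIFT with Lowe's ratio test stands in, and the per-region mismatch count is only a crude approximation of the paper's measure.

```python
# Stand-in for the keypoint-matching step: plain OpenCV SIFT with Lowe's ratio
# test. Keypoints in one twin's region crop that find no good match in the
# other twin's crop are counted as a crude "mismatch" score for that region.
import cv2

def mismatch_score(region_a, region_b, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(region_a, None)   # greyscale crops
    kp_b, des_b = sift.detectAndCompute(region_b, None)
    if des_a is None or des_b is None:
        return len(kp_a) + len(kp_b)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
    return len(kp_a) - good

# regions = {"eyes": (crop_a, crop_b), "nose": (crop_a, crop_b), ...}  # five crops
# scores = {name: mismatch_score(a, b) for name, (a, b) in regions.items()}
```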
... Facial expression recognition [1][2][3][4][5][6] and modeling techniques have broad academic and commercial value. The method of this paper is as follows: first, the images in the video are pre-processed by locating the face and extracting feature point information. ...
Facial modeling is a key step in modeling visual effects for special movie effects and computer games. In this paper, a method combining deep learning and feature extraction is proposed for building a 3D face model. First, the face region is located in the captured face image. Then, facial feature points are extracted by a landmark algorithm and a Convolutional Neural Network (CNN) is used to classify the facial expression. Next, an expression-specific 3D face model is created by deforming the standard 3D face model according to the expression classification result. Finally, the 3D face model and the extracted facial feature points are combined to perform personalized adjustment of the model, completing a 3D facial expression animation system. The experimental results show that the proposed method can effectively perform dynamic 3D face modeling with high realism.
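As a sketch of the CNN expression-classification step in this pipeline (PyTorch): the architecture, the 64x64 input size, and the assumption of seven expression classes are illustrative, not the network used in the cited paper.

```python
# Minimal sketch of the CNN expression-classification step in the pipeline above.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                 # x: (batch, 1, 64, 64) cropped face
        return self.classifier(self.features(x).flatten(1))

# logits = ExpressionCNN()(torch.randn(4, 1, 64, 64))   # -> shape (4, 7)
```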
... This work (Devi, Laishram, & Thounaojam, 2015) aims to develop a recognition application by combining R-KDA with non-linear SVMs; the results are compared with those of R-KDA with k-NN, with the proposed combination achieving the better results. The work proposes a two-phase recognition system, a feature extraction phase and a face recognition phase, using R-KDA for the former and a non-linear SVM for the latter. ...
This paper presents a literature review of 2D facial recognition, which plays an important role in human life in terms of safety, work activity, etc. The focus is on the results obtained by researchers applying feature extraction techniques, pattern classifiers, and databases, together with the respective recognition rates achieved. The objective is to identify efficient techniques that enable an optimal 2D facial recognition process, based on the quality of the databases, feature extractors, and pattern classifiers.
Social computing, an interdisciplinary field of computational science and social science, has been affecting people's learning, work, and life. Face recognition is reaching into every field of social life, and feature extraction is particularly important for it. Linear Discriminant Analysis (LDA) is an effective feature extraction method; however, traditional LDA cannot solve the nonlinear and small-sample problems that arise in high-dimensional spaces. In this paper, Support Vector-based Direct Discriminant Analysis (SVDDA) is proposed. It incorporates the SVM algorithm into LDA, extends SVM to a nonlinear eigenspace, and optimizes the eigenvalues to improve performance. Moreover, the paper combines SVDDA with social computing theory. Experiments were conducted on different face datasets; compared with other existing methods, SVDDA shows higher robustness and better performance.
Most existing user authentication approaches for detecting fraud in e-commerce applications have focused on Secure Sockets Layer (SSL)-based authentication that inspects a username and a password from a server, rather than inspecting personal biometric information. Because of the lack of support for mutual (two-way) authentication between a consumer and a mercantile agent, one-way SSL authentication cannot prevent man-in-the-middle attacks. In practice, user authentication systems use machine learning and the generalisation capability of support vector machines (SVMs) to guarantee a small classification error. This study developed an online face-recognition system by training an SVM classifier on user facial features derived from wavelet transforms and a spatially enhanced local binary pattern. A cross-validation scheme and SVMs applied to the Olivetti Research Laboratory database of user facial features were used to address classification precision. Experimental results showed that the classification error decreased as the size of the training set increased. By aggregating both low-resolution and high-resolution face image samples, the global precision of face recognition exceeded 97% with a tenfold cross-validation scheme for image data sizes of 168 and 341, respectively. Overall, the proposed scheme provided higher face-recognition precision than the average precision of existing schemes for low-resolution face images (approximately 89%).
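A minimal sketch approximating the classifier described in this abstract: local binary pattern (LBP) histogram features fed to an SVM with k-fold cross-validation. The wavelet transform stage and the exact spatially enhanced LBP variant are omitted, and all parameters are illustrative choices rather than the cited paper's settings.

```python
# Sketch: LBP-histogram features + SVM with k-fold cross-validation.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lbp_histogram(image, P=8, R=1):
    # uniform LBP with P=8 yields values in [0, P+1], i.e. P+2 histogram bins
    lbp = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# images: list of greyscale face arrays, labels: subject ids
# X = np.array([lbp_histogram(img) for img in images])
# scores = cross_val_score(SVC(kernel="rbf", C=10.0), X, labels, cv=10)
# print(scores.mean())
```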