Article

A novel virtual samples-based sparse representation method for face recognition


Abstract

A limited training set usually constrains the performance of face recognition in practice. Even sparse representation-based methods, which otherwise excel at face recognition, cannot avoid this limitation. To effectively improve the recognition accuracy of sparse representation-based methods on a limited training set, a novel virtual samples-based sparse representation (VSSR) method for face recognition is proposed in this paper. In the proposed method, virtual training samples are constructed to enlarge the size and diversity of the training set, and a sparse representation-based method is used to classify test samples. Extensive experiments on different face databases confirm that VSSR is robust to illumination variations and works better than many representative representation-based face recognition methods.
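The two steps of the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: the noise level, the number of virtual copies, and the ridge (l2) solver standing in for the paper's sparse (l1) solver are all assumptions.

```python
import numpy as np

def make_virtual_samples(X, noise_std=0.05, copies=2, seed=0):
    # X holds one training sample per column; each virtual copy is the
    # original matrix plus small Gaussian noise, enlarging the training set
    rng = np.random.default_rng(seed)
    virtual = [X + rng.normal(0.0, noise_std, X.shape) for _ in range(copies)]
    return np.hstack([X] + virtual)

def residual_classify(X_aug, labels_aug, y, lam=1e-3):
    # Represent y over all (augmented) training samples, then assign the
    # class whose own samples reconstruct y with the smallest residual.
    # Ridge regression stands in here for the paper's sparse solver.
    G = X_aug.T @ X_aug + lam * np.eye(X_aug.shape[1])
    alpha = np.linalg.solve(G, X_aug.T @ y)
    best_cls, best_res = None, np.inf
    for c in np.unique(labels_aug):
        m = labels_aug == c
        res = np.linalg.norm(y - X_aug[:, m] @ alpha[m])
        if res < best_res:
            best_cls, best_res = c, res
    return best_cls
```

Any representation-based classifier can be dropped in place of the ridge step; the augmentation is independent of the classifier.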


... The virtual samples were created by adding small noise to the original training samples. Wang et al. [35] improved the method proposed in [27]: they divided an original image into several parts and generated a corresponding virtual training sample by adding different random noise to the pixels of each part. ...
... To evaluate the performance of the proposed method, a number of experiments are conducted on three benchmark face image databases: ORL, Yale, and AR. To show the effectiveness of our proposed method, the similar methods proposed in [26,27,35] were implemented and evaluated on the same face data. Moreover, as a preprocessing step before training, the virtual-sample generation method proposed in this paper can be combined with many classification algorithms. ...
... So when matching a test face with an unknown facial change, a more similar training sample can often be found among the augmented samples. Extensive experiments show that the proposed method outperforms the similar methods proposed in [26,27,35]. ...
Article
Full-text available
Classifiers based on sparse representation or collaborative representation can achieve good performance in face recognition, but these methods require a number of training samples in each class to construct the dictionary. When the training samples are undersampled, their performance decreases dramatically. A novel method is proposed in this paper to address the undersampled face recognition problem. Firstly, virtual face images are generated by principal component analysis and a mirror transform. Secondly, the test sample is collaboratively represented by the augmented training samples and recognized by a representation-based classifier. A number of face recognition experiments on three benchmark face databases show that the recognition accuracy of our method is greater than that of similar methods, while its time efficiency remains competitive.
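The mirror-transform half of the augmentation described above is straightforward to sketch (the PCA-based reconstruction step is omitted here; the helper names are illustrative, not the paper's):

```python
import numpy as np

def mirror_virtual_faces(images):
    # Horizontal flip: each training face yields one mirror-image
    # virtual face
    return [np.fliplr(img) for img in images]

def augment_with_mirrors(images):
    # Return the original faces followed by their mirror images,
    # doubling the training set
    return list(images) + mirror_virtual_faces(images)
```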
... According to the experimental results, this method achieves significant improvements in classification accuracy. Wang et al. (2014) proposed a novel virtual samples-based sparse representation (VSSR) method for face recognition. First, suppose that X is the original image matrix with size M × N. ...
... symmetrical face and CRC (Liu et al. 2015a, b)
VSF_2DPCA: Virtual symmetrical face and 2DPCA (Zhang et al. 2014)
MI_RC: Mirror image and the representation-based classification method (Xu et al. 2014a, b, c)
MI_CIRC: Mirror image and the conventional and inverse representation (Xu et al. 2014a, b, c)
ASF: Approximately symmetrical face images (Xu et al. 2016)
PTSFE: Polynomial transformation and the subspace feature extraction method (Zhai et al. 2011)
SIS: Single image subspace approach (Liu et al. 2007)
VSEM_2DPCA: Virtual sample expansion method and 2DPCA (Shan 2013)
SHC: Simple hybrid classifier for face recognition with adaptively generated virtual data (Ryu and Oh 2002)
BTSS_RC: Data uncertainty in face recognition (Xu et al. 2014a, b, c)
MRSRC: Multiple representations and sparse representation for image classification (Xu et al. 2015a, b, c)
QDA: Quadratic discriminant analysis method based on virtual training samples (Wang and Yang 2008)
VSSR: Virtual samples-based sparse representation method for face recognition (Wang et al. 2014) ...
Article
Full-text available
Despite considerable advances made in face recognition in recent years, recognition performance still suffers from insufficient training samples. Hence, various algorithms have been proposed to address the problem of small sample size with dramatic variations in illumination, pose and facial expression. Among these algorithms, virtual sample generation technology achieves promising performance with reasonable and effective mathematical functions and easy implementation. In this paper, we systematically summarize the research progress in virtual sample generation technology for face recognition and categorize the existing methods into three groups, namely, (1) construction of virtual face images based on the face structure; (2) construction of virtual face images based on the idea of perturbation and the distribution function of samples; (3) construction of virtual face images based on the sample viewpoint. We carry out a thorough and comprehensive comparative study in which the different methods are compared through an in-depth analysis. This study demonstrates the significant advantage of combining virtual sample generation technology with representation-based methods.
... It seems that the mirror face images have a better appearance than the symmetrical face images. We also see that the noised face images [16][17][18] are useful representations of faces. Recent studies also show that even a simple transformation is able to attain accuracy improvements in image classification [19][20][21]. ...
Article
Full-text available
Description and classification of face images is a significant task for the computer vision, machine learning and pattern recognition communities, and researchers have made tremendous efforts on it in the past. Previous researchers have always sought high-resolution face images for better image classification. With this paper, however, we present and demonstrate a new opinion: in some cases the use of alternative representations of facial images is very useful for face recognition, and properly reducing the image resolution might be beneficial to better classification of face images. This may be attributed to the deformable property of faces and the fact that the proposed alternative representations can to some extent reduce the within-class difference of facial images. The presented idea also appears useful for helping people improve face recognition techniques in the real world.
... Face detection is a key technology in face information processing. It is also the preparatory step for face feature point location [1][2][3], face comparison [4][5], face recognition [6][7], face super-resolution [8][9] and other related tasks. The quality of the detection directly affects the accuracy of subsequent operations, so it has important research significance. ...
Conference Paper
At present, face detection models based on a single convolutional neural network suffer from low accuracy on small-scale faces when detecting faces at different scales. We therefore propose an improved multi-scale face detection method based on SSD. The method adopts a feature-dense connection strategy to improve the structure of the base network in the SSD model, strengthening information flow between different convolutional layers and improving the feature description ability of the base network. The detection accuracy on small-scale faces is then improved by introducing context information into the shallow features. We evaluate our proposed architecture on the WIDER FACE dataset, where it achieves average precision (AP) of 73.1%, 90% and 92% on the "difficult", "medium" and "simple" subsets respectively, higher than several other methods.
... Next, these samples are added into each class of the training set. Efficient methods were proposed in Huang et al. (2003), Vetter (1998) and Wang et al. (2014) to generate virtual images in order to enlarge the training set. In this direction also, using virtual images mechanism, Zhang et al. (2005) proposed a new method based on SVD perturbation in order to address the SSPS problem. ...
Article
Full-text available
Face recognition is receiving a significant attention due to the need of facing important challenges when developing real applications under unconstrained environments. The three most important challenges are facial occlusion, the problem of dealing with a single sample per subject (SSPS) and facial expression. This paper describes and analyzes various strategies that have been developed recently for overcoming these three major challenges that seriously affect the performance of real face recognition systems. This survey is organized in three parts. In the first part, approaches to tackle the challenge of facial occlusion are classified, illustrated and compared. The second part briefly describes the SSPS problem and the associated solutions. In the third part, facial expression challenge is illustrated. In addition, pros and cons of each technique are stated. Finally, several improvements for future research are suggested, providing a useful perspective for addressing new research in face recognition.
Article
Video surveillance has attracted increasing interest in the last decade, so video-based Face Recognition (FR) has become an important task. However, surveillance videos include many vague non-frontal faces, especially views of faces looking down or up, so most FR algorithms perform worse when applied to surveillance videos. On the other hand, it is common in the video surveillance field that only a Single training Sample Per Person (SSPP) is available, taken from an identification card. To effectively improve FR for both the SSPP problem and the low-quality problem, this paper proposes an approach to synthesize face images based on 3D face modeling and blurring. In the proposed algorithm, a high-resolution 2D frontal face is first used to build a 3D face model, then several virtual faces with different poses are synthesized from the 3D model, and finally degraded face images are constructed from the original and virtual faces through a blurring process. Multiple face images can then be chosen from the frontal, virtual and degraded faces to build a training set. Both the SCface and LFW databases were employed to evaluate the proposed algorithm using PCA, FLDA, scale-invariant feature transform, compressive sensing and deep learning. The results on both datasets showed that the performance of these methods could be improved when virtual faces were generated to train the classifiers. Furthermore, on the SCface database the average recognition rates increased by up to 10%, 16.62%, 13.03%, 19.44% and 23.28% respectively for the above-mentioned methods when virtual-view and blurred faces were used to train their classifiers. The experimental results indicate that the proposed method for generating more training samples is effective and could be applied in intelligent video surveillance systems.
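The blurring step of the pipeline above can be illustrated with a simple box blur (an assumption: the abstract does not specify the blur kernel, and the 3D pose-synthesis step is not reproduced here):

```python
import numpy as np

def blur_degrade(img, ksize=3):
    # Average each pixel with its ksize-by-ksize neighbourhood to mimic
    # low-quality surveillance imagery; edge padding keeps the output
    # the same size as the input
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return out
```

Degraded copies produced this way would join the frontal and virtual-pose faces in the training set.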
Article
By representing the input testing image as a sparse linear combination of the training samples, sparse representation based classification (SRC) has shown promising results for face recognition (FR). We consider the problem of the low effectiveness and efficiency of the SRC algorithm when the face database is huge. Combining linear regression classification (LRC) with collaborative representation, we propose a coarse-to-fine FR algorithm, namely the accelerated linear collaborative representation based classification (ALCRC) algorithm. The proposed algorithm contains two stages. In the first stage, we use LRC to coarsely select a candidate training set according to the least residuals, reducing the search space of the training set. In the second stage, we employ collaborative representation based classification (CRC) to make the proposed algorithm robust to face occlusion and variations such as illumination, expression and pose. Experiments on the AR, ORL and FERET databases demonstrate that the proposed algorithm has higher recognition rates and efficiency than SRC and CRC, and that it is more robust. © 2014 Journal of Computational Information Systems. All rights reserved.
Article
Extracting salient features from images is significant for image classification. Deformable objects suffer from the problem that a number of pixels may have varying intensities; in other words, pixels at the same positions in training samples and test samples of an object usually have different intensities, which makes it difficult to obtain salient features of images of deformable objects. In this paper, we propose a novel method to address this issue. Our method first produces a new representation of each original image that enhances pixels with moderate intensities and reduces the importance of other pixels. The new representation and the original image are complementary in representing the object, so integrating them is able to improve the accuracy of image classification. Image classification experiments show that the simultaneous use of the proposed novel representations and the original images obtains a much higher accuracy than the use of only the original images. In particular, incorporating sparse representation into the proposed method brings a surprising improvement in accuracy: the maximum improvement may be greater than 8%. Moreover, the proposed non-parametric weighted fusion procedure is also attractive. The code of the proposed method is available at http://www.yongxu.org/lunwen.html.
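One plausible transform in the spirit of "enhance moderate intensities, suppress the rest" is sketched below. This is an assumption: the abstract gives no formula, so the Gaussian weighting, the `sigma` value, and the function name are all illustrative.

```python
import numpy as np

def moderate_intensity_map(img, sigma=0.25):
    # Weight each normalized pixel by a Gaussian centred at mid-grey, so
    # moderate intensities dominate the new representation while dark and
    # bright extremes are suppressed (hypothetical transform, not the
    # paper's exact one)
    x = img.astype(float) / 255.0
    weight = np.exp(-((x - 0.5) ** 2) / (2.0 * sigma ** 2))
    return weight * x
```

The transformed image would then be fused with the original, e.g. by the weighted fusion the abstract mentions.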
Conference Paper
Full-text available
Recent research has shown the effectiveness of using sparse coding (SC) to solve many computer vision problems. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which may reduce the feature quantization error and boost sparse coding performance, we propose Kernel Sparse Representation (KSR). KSR is essentially the sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to both image classification and face recognition. By incorporating KSR into Spatial Pyramid Matching (SPM), we propose KSRSPM for image classification. KSRSPM can further reduce the information loss in the feature quantization step compared with Spatial Pyramid Matching using Sparse Coding (ScSPM). KSRSPM can be regarded both as a generalization of the Efficient Match Kernel (EMK) and as an extension of ScSPM. Compared with sparse coding, KSR can learn more discriminative sparse codes for face recognition. Extensive experimental results show that KSR outperforms sparse coding and EMK, and achieves state-of-the-art performance for image classification and face recognition on publicly available datasets.
Article
Full-text available
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
Conference Paper
Full-text available
Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that the compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple ℓ2 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach, it is also more robust, and much faster. These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly.
Conference Paper
Full-text available
As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the role of collaborative representation (CR) in SRC is ignored by most of the literature. But is it really the l1-norm sparsity that improves the FR accuracy? This paper analyzes the working mechanism of SRC and indicates that it is the CR, not the l1-norm sparsity, that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). Extensive experiments clearly show that CRC_RLS achieves very competitive classification results while having significantly less complexity than SRC.
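The appeal of CRC_RLS is that the coding vector has a closed form, so no iterative l1 solver is needed. A minimal sketch (the regularization value and the tiny demo data are assumptions; the class-wise residual normalized by coefficient energy follows the scheme described above):

```python
import numpy as np

def crc_rls(X, labels, y, lam=1e-2):
    # Collaborative representation with regularized least squares:
    # alpha = (X^T X + lam*I)^{-1} X^T y, computed in one linear solve.
    # Classify by the class-wise residual divided by the coefficient
    # energy of that class.
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best_cls, best_score = None, np.inf
    for c in np.unique(labels):
        m = labels == c
        score = np.linalg.norm(y - X[:, m] @ alpha[m]) / np.linalg.norm(alpha[m])
        if score < best_score:
            best_cls, best_score = c, score
    return best_cls
```

Because the solve can be precomputed as a projection matrix, testing cost is a single matrix-vector product per query, which is the source of the efficiency gain over SRC.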
Article
In this paper, we propose a novel method for face recognition. Its basic idea is to use a coarse-to-fine strategy based on the Scale Invariant Feature Transform (SIFT) feature. To recognize a test sample, our method takes three main steps. The first step identifies a certain number of candidates from the training samples, based on the Euclidean distance between the test sample and all training samples. The second step counts the number of well-matched pairs of SIFT features between each candidate chosen in the first step and the test sample, then keeps the candidates with the greatest numbers of well-matched pairs. The third step calculates the similarity between the test sample and each class containing training samples chosen in the second step, and selects the class with the highest similarity as the recognition result. Through the first two steps, our method greatly reduces the computational complexity and, to a certain extent, avoids interference from samples that may cause erroneous recognition. The third step enhances the robustness of our method. Extensive experiments on different public face databases confirm that our method obtains high recognition accuracy and good robustness.
Article
A new algorithm is proposed to deal with single-training-sample face recognition. After geometric normalization of the face images, we generate 13 virtual samples for each face by using geometric transformations and SVD decomposition. The gray-value distribution of each image is normalized to a standard normal distribution. Finally, sparse representation is used to recognize the faces. Experiments on the ORL database show that our algorithm outperforms both classical algorithms and recently proposed ones.
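SVD-based perturbation, one of the virtual-sample styles mentioned above, can be sketched by truncating the singular spectrum and reconstructing. This is an illustration only: the paper's full 13-sample recipe, including the geometric transforms, is not reproduced here.

```python
import numpy as np

def svd_virtual_sample(img, keep):
    # Keep only the leading `keep` singular values and reconstruct;
    # the result is a slightly perturbed virtual face that preserves
    # the dominant structure of the original
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    s_trunc = s.copy()
    s_trunc[keep:] = 0.0
    return U @ np.diag(s_trunc) @ Vt
```

Varying `keep` yields a family of virtual samples of decreasing fidelity, each of which can be added to the training set.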
Conference Paper
Generating a high resolution (HR) image from its corresponding low resolution (LR) counterpart is an important problem in many application fields. The recently widely used sparse representation (SR) techniques provide a pioneer work to this inverse problem by incorporating the sparsity prior into the super-resolution reconstruction process. Motivated by this work, in this paper, we present a new face image super-resolution method using the sparse representation, which first seeks a sparse representation for each low-resolution input, and then the representation coefficients are directly used to generate the corresponding high-resolution output. The effectiveness of the proposed method is evaluated through the experiments on the benchmark face database, and the experimental results demonstrate that the proposed method can achieve competitive performance compared with other state-of-the-art methods.
Article
The sparse representation method (SRM) is a state-of-the-art face recognition method. Nevertheless, SRM exploits image samples rather than image features to perform classification, and a proper feature of an image can be more discriminative than the image sample itself. For example, Gabor and local binary pattern (LBP), two widely used kinds of features, have shown excellent discriminative performance in face recognition. Recently a number of experiments have shown that the complete local binary pattern (CLBP) obtains much better results than LBP in recognizing texture images. With this paper, we propose a novel sparse representation method based on Gabor and CLBP features for face recognition. Our method first extracts the most discriminative features and then uses SRM to perform face recognition. The proposed method is composed of the following steps: the first step performs histogram equalization on the image samples; the second step extracts the Gabor and CLBP features from the image samples; the last step uses the sparse representation method based on the combination of Gabor and CLBP features to perform classification. The rationale of our method is as follows: the first step reduces the adverse effects caused by variable illumination, and the Gabor and CLBP features are not only very discriminative but also complementary. A large number of experiments show the superior performance of our method. On the FERET face database, the classification error rate of our method is 28.8% lower than that of SRM and 14.8% lower than that of LRC. On the ORL face database, the classification error rate of our method is 9% lower than that of SRM and 9.5% lower than that of LRC.
Article
In this paper, we propose to use all the training samples, in the original space or in a transform space, to represent and classify test samples. It is shown that this method possesses some of the properties of sparseness: a large portion of the solution components have very small absolute values and only a few have large absolute values. Our analysis partially supports this claim of sparseness mathematically. We also explore other characteristics of the proposed method and compare the proposed sample representation method with transform methods that are based on conventional coordinate axes. The proposed method performs better than state-of-the-art face recognition methods. Further, it can be solved at a low computational cost; its algorithm is simple and easy to understand, and its classification procedure is intuitive. The performance of our method is demonstrated by a large number of face recognition experiments.
Article
In this paper, we present collaborative representation-based classification on selected training samples (CRC_STS) for face image recognition. CRC_STS uses a two-stage scheme: the first stage selects the most significant training samples from the original training set through multiple rounds of a refining process; the second stage uses a collaborative representation classifier to perform classification on the selected training samples. Our method can be regarded as a sparse representation approach, but without imposing an l1-norm constraint on the representation coefficients. Experimental results on three well-known face databases show that our method works very well.
Article
Though sparse representation (Wagner et al. in IEEE Trans Pattern Anal Mach Intell 34(2):372–386, 2012; CVPR 597–604, 2009) can perform very well in face recognition (FR), it can still be improved. To improve the performance of FR, a novel sparse representation method based on virtual samples is proposed in this paper. The proposed method first extends the training samples to form a new training set by adding random noise to them, and then performs FR. As the testing samples can be represented better with the new training set, the ultimate classification obtained using the proposed method is more accurate than classification based on the original training samples. A number of FR experiments show that the classification accuracy obtained using our method is usually 2–5% greater than that obtained using the method mentioned in Xu and Zhu (Neural Comput Appl, 2012).
Article
The traditional matrix-based feature extraction methods that have been widely used in face recognition essentially work on the facial image matrices in only one or two directions. For example, 2DPCA can be seen as row-based PCA: it only reflects the information in each row, and some structural information cannot be uncovered by it. In this paper, we propose directional 2DPCA (D2DPCA), which can extract features from the matrices in any direction. To effectively use all the features extracted by D2DPCA, we combine a bank of D2DPCAs performed in different directions to develop a matching-score-level fusion method named multi-directional 2DPCA for face recognition. The results of experiments on the AR and FERET datasets show that the proposed method can obtain a higher accuracy than previous matrix-based feature extraction methods.
Article
In this paper, we propose a very simple and fast face recognition method and present its potential rationale. This method first selects from every class only the training sample nearest to the test sample, and then expresses the test sample as a linear combination of all the selected training samples. Using the expression result, the proposed method can classify the test sample with high accuracy, and it classifies more accurately than the nearest neighbor classification method (NNCM). Face recognition experiments show that the classification accuracy obtained using our method is usually 2–10% greater than that obtained using NNCM. Moreover, though the proposed method exploits only one training sample per class to perform classification, it may perform better than the nearest feature space method proposed in Chien and Wu (IEEE Trans Pattern Anal Machine Intell 24:1644–1649, 2002), which depends on all the training samples to classify the test sample. Our analysis shows that the proposed method achieves this by modifying the neighbor relationships between the test sample and the training samples, as determined by the Euclidean metric.
Article
In this paper, we propose a coarse to fine K nearest neighbor (KNN) classifier (CFKNNC). CFKNNC differs from the conventional KNN classifier (CKNNC) as follows: CFKNNC first coarsely determines a small number of training samples that are “close” to the test sample and then finely identifies the K nearest neighbors of the test sample. The main difference between CFKNNC and CKNNC is that they exploit the “representation-based distances” and Euclidean distances to determine the nearest neighbors of the test sample from the set of training samples, respectively. The analysis shows that the “representation-based distances” are able to take into account the dependent relationship between different training samples. Actually, the nearest neighbors determined by the proposed method are optimal from the point of view of representing the test sample. Moreover, the nearest neighbors obtained using our method contain less redundant information than those obtained using CKNNC. The experimental results show that CFKNNC can classify much more accurately than CKNNC and various improvements to CKNNC such as the nearest feature line (NFL) classifier, the nearest feature space (NFS) classifier, nearest neighbor line classifier (NNLC) and center-based nearest neighbor classifier (CBNNC).
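The coarse-to-fine scheme above can be sketched as follows. This is a minimal illustration under assumptions: the ridge coding step and the per-sample distance ||y − αⱼxⱼ|| stand in for the paper's exact representation-based distance, and the parameter values are arbitrary.

```python
import numpy as np

def cfknn_classify(X, labels, y, m=4, k=3, lam=1e-3):
    # Coarse stage: keep the m training samples (columns of X) closest
    # to y in Euclidean distance.
    coarse = np.argsort(np.linalg.norm(X - y[:, None], axis=0))[:m]
    Xc = X[:, coarse]
    # Fine stage: represent y over the m retained samples (ridge
    # solution), re-rank them by the representation-based distance
    # ||y - alpha_j * x_j||, then majority-vote over the k best.
    alpha = np.linalg.solve(Xc.T @ Xc + lam * np.eye(m), Xc.T @ y)
    rep_dist = np.array([np.linalg.norm(y - alpha[j] * Xc[:, j]) for j in range(m)])
    fine = coarse[np.argsort(rep_dist)[:k]]
    classes, votes = np.unique(labels[fine], return_counts=True)
    return classes[np.argmax(votes)]
```

The point of the fine stage is that the coefficients are solved jointly, so each sample's distance accounts for the other retained samples rather than being an independent Euclidean measurement.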
Article
The limited number of available training samples has become one bottleneck of face recognition. In real-world applications, face images may exhibit various changes owing to varying illumination, facial expression and pose, and insufficient training samples cannot comprehensively convey these possible changes, so it is hard to improve the accuracy of face recognition. In this paper, we propose to exploit the symmetry of the face to generate new samples and devise a representation-based method to perform face recognition. The new training samples really reflect some possible appearances of the face. The devised representation-based method simultaneously uses the original and new training samples in a two-step classification, which ultimately uses a small number of classes that are 'near' to the test sample to represent and classify it, giving it an advantage similar to that of the sparse representation method. This method also takes advantage of score-level fusion, which has proven very competent and usually performs better than decision-level and feature-level fusion. The experimental results show that the proposed method outperforms state-of-the-art face recognition methods including sparse representation classification (SRC), linear regression classification (LRC), collaborative representation (CR) and the two-phase test sample sparse representation (TPTSSR).
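The symmetry-based sample generation described above can be sketched directly: each face yields one virtual face built from its left half plus that half's mirror, and one from its right half plus its mirror (a sketch assuming an even image width; the function name is illustrative).

```python
import numpy as np

def symmetry_virtual_faces(img):
    # Exploit left-right facial symmetry: splice each half of the face
    # with its own mirror image to produce two perfectly symmetric
    # virtual faces
    half = img.shape[1] // 2
    left, right = img[:, :half], img[:, half:]
    from_left = np.hstack([left, np.fliplr(left)])
    from_right = np.hstack([np.fliplr(right), right])
    return from_left, from_right
```

Both virtual faces keep the original resolution, so they can be appended to the training set and fed to any representation-based classifier unchanged.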
Conference Paper
In many problems in computer vision, data in multiple classes lie in multiple low-dimensional subspaces of a high-dimensional ambient space. However, most existing classification methods do not explicitly take this structure into account. In this paper, we consider the problem of classification in the multi-subspace setting using sparse representation techniques. We exploit the fact that the dictionary of all the training data has a block structure in which the training data in each class form a few blocks of the dictionary. We cast classification as a structured sparse recovery problem whose goal is to find a representation of a test example that uses the minimum number of blocks from the dictionary. We formulate this problem using two different classes of non-convex optimization programs, propose convex relaxations for both, and study conditions under which the relaxations are equivalent to the original problems. In addition, we show that the proposed optimization programs can be modified to also deal with corrupted data. To evaluate the proposed algorithms, we consider the problem of automatic face recognition. We show that casting face recognition as a structured sparse recovery problem can improve the results of state-of-the-art face recognition algorithms, especially when there is a relatively small number of training data for each class. In particular, we show that the new class of convex programs can improve state-of-the-art face recognition results by 10% with only 25% of the training data. In addition, we show that the algorithms are robust to occlusion, corruption, and disguise.
Conference Paper
In this paper we address, for the first time, the problem of video-based face recognition in the context of sparse representation classification (SRC). SRC using still face images has recently emerged as a new paradigm in view-based face recognition research. In this work we extend the SRC algorithm to the problem of temporal face recognition. Extensive identification and verification experiments were conducted using the VidTIMIT database [1,2]. Comparative analysis with state-of-the-art Scale Invariant Feature Transform (SIFT) based recognition was also performed. The SRC algorithm achieved 94.45% recognition accuracy, comparable to the 93.83% achieved by the SIFT-based approach. Verification experiments yielded a 1.30% Equal Error Rate (EER) for SRC, which outperformed the SIFT approach by a margin of 0.5%. Finally, the two classifiers were fused using the weighted sum rule. The fusion results consistently outperformed the individual experts under the identification, verification, and rank-profile evaluation protocols.
Conference Paper
By coding the input testing image as a sparse linear combination of the training samples via l1-norm minimization, sparse representation based classification (SRC) has recently been used successfully for face recognition (FR). In particular, by introducing an identity occlusion dictionary to sparsely code the occluded portions of face images, SRC can produce FR results that are robust to occlusion. However, the large number of atoms in the occlusion dictionary makes the sparse coding computationally very expensive. In this paper, image Gabor features are used for SRC. The use of Gabor kernels makes the occlusion dictionary compressible, and a Gabor occlusion dictionary computing algorithm is then presented. The number of atoms is significantly reduced in the computed Gabor occlusion dictionary, which greatly reduces the computational cost of coding occluded face images while greatly improving SRC accuracy. Experiments on representative face databases with variations in lighting, expression, pose, and occlusion demonstrated the effectiveness of the proposed Gabor-feature based SRC (GSRC) scheme.
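The core SRC pipeline that the abstract builds on (l1-regularized coding of a test image over the training dictionary, then classification by class-wise reconstruction residual) can be sketched as follows. This is a bare-bones illustration, not the Gabor-feature variant: the l1 problem is solved with plain ISTA, and the dictionary, labels, and parameter values are assumptions for the example:

```python
import numpy as np

def src_classify(D, labels, y, lam=0.01, n_iter=200):
    """Classify y by sparse coding over dictionary D (columns = training
    samples) via ISTA for min_x 0.5 * ||D x - y||^2 + lam * ||x||_1, then
    assign the class with the smallest class-wise reconstruction residual."""
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit columns
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):                # ISTA: gradient step + shrinkage
        g = D.T @ (D @ x - y)
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        x_c = np.where(mask, x, 0.0)       # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ x_c)
    return min(residuals, key=residuals.get)
```

On toy data where the test vector lies in the span of one class's training columns, the residual for that class is near zero while the other class's residual stays near the norm of the test vector.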
Conference Paper
In this paper, motivated by the recent development of sparse representation (SR) and compressive sensing (CS), and in order to address the one-sample problem, we propose two approaches: shifted images + SRC (SSRC) and reconstructed images + SRC (RSRC). Specifically, we generate multiple images by shifting the original image or reconstructing the original image via PCA (Principal Component Analysis), regard the new images as training samples, and then apply SRC (Sparse Representation-based Classification) to the new training sample set. The experimental results on two popular face databases (ORL and Yale) demonstrate the feasibility and effectiveness of our proposed methods.
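The two virtual-sample generators described above (pixel shifting and PCA reconstruction) can be sketched in a few lines. The shift amounts, zero-padding at the borders, and the choice of retained components are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def shifted_virtual_samples(img, max_shift=1):
    """Generate virtual samples by shifting an image by up to max_shift
    pixels in each direction (vacated border pixels filled with zeros)."""
    h, w = img.shape
    out = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if dy == 0 and dx == 0:
                continue                       # skip the unshifted original
            v = np.zeros_like(img)
            ys, yd = max(0, -dy), max(0, dy)   # source / destination offsets
            xs, xd = max(0, -dx), max(0, dx)
            v[yd:h - ys, xd:w - xs] = img[ys:h - yd, xs:w - xd]
            out.append(v)
    return out

def pca_reconstructed_samples(X, ks=(1, 2)):
    """Generate virtual samples by reconstructing each row of X (one
    vectorized face per row) from its first k principal components."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return [(X - mu) @ Vt[:k].T @ Vt[:k] + mu for k in ks]
```

The generated images are then appended to the training set before running SRC, as in the SSRC/RSRC pipeline.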
Conference Paper
This paper presents a novel statistical model to help explore the FPGA interconnect architecture design space efficiently. A series of parameters characterizing the GRM (General Routing Matrix) based interconnect architecture are defined. By analyzing these parameters, such as routing segment type, channel width, and drive relation, our model is able to calculate the average-hops indicator, which experiments show to be a good estimate of the timing performance of architectures. With our model, we evaluate hundreds of architectures and derive a formula expressing the possible trade-offs between performance and area. We select several representatives and conclude that the routing segment is still a powerful tool in design trade-offs. Keywords: hops; model; interconnect; FPGA.
Article
In this paper, we propose a two-phase test sample representation method for face recognition. The first phase of the proposed method represents the test sample as a linear combination of all the training samples and exploits the representation ability of each training sample to determine M "nearest neighbors" for the test sample. The second phase represents the test sample as a linear combination of the determined M nearest neighbors and uses the representation result to perform classification. We propose this method under the following assumption: the test sample and some of its neighbors probably belong to the same class. Thus, we use the first phase to detect the training samples that are far from the test sample and assume that these samples have no effect on the ultimate classification decision. This helps classify the test sample accurately. We also give a probabilistic explanation of the proposed method. A number of face recognition experiments show that our method performs very well. Index Terms: computer vision, face recognition, pattern recognition, sparse representation, transform methods.
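The two phases described above can be sketched directly. This minimal version uses ridge-regularized least squares for both representation steps and measures each training sample's "representation ability" by the deviation between the test sample and that sample's scaled contribution; the data layout and parameter values are assumptions for illustration:

```python
import numpy as np

def tptsr_classify(X, labels, y, M=3, mu=1e-3):
    """Two-phase test sample representation (sketch). Phase 1: represent y
    over all training samples (columns of X) and keep the M samples whose
    individual contributions deviate least from y. Phase 2: re-represent y
    using only those M neighbors and assign the class with the smallest
    class-wise reconstruction residual."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)

    def solve(A):  # ridge-regularized least squares: (A'A + mu I) c = A'y
        return np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ y)

    a = solve(X)                                       # phase-1 coefficients
    dev = np.linalg.norm(y[:, None] - X * a, axis=0)   # per-sample deviation
    idx = np.argsort(dev)[:M]                          # M "nearest neighbors"
    b = solve(X[:, idx])                               # phase-2 coefficients
    best, best_r = None, np.inf
    for c in np.unique(labels[idx]):
        mask = labels[idx] == c
        r = np.linalg.norm(y - X[:, idx][:, mask] @ b[mask])
        if r < best_r:
            best, best_r = c, r
    return best
```

Discarding the distant samples in phase 1 means the phase-2 representation, and hence the residual comparison, is computed only over plausible neighbors of the test sample.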
Article
By using sparse representation and compressed sensing, researchers have been able to demonstrate significant improvements in accuracy over traditional face-recognition techniques.
Article
Since Karel Capek first used the word "robot" in print in a 1920 play, a vast array of autonomous electro-mechanical systems have emerged from research labs, making their way onto production lines for industrial tasks, into toy stores for entertainment, and even into homes to perform simple household jobs. While the bulk of robotics research strives to make robots more useful and capable of ever greater levels of autonomy, several labs are attempting to make robotic systems much smaller. One of the most active areas of such research is medical nanorobotics, an emerging field positioned at the intersection of several sciences. As a discipline, medical nanorobotics remains young for now, but many scientists are already demonstrating new developments that they say will form the foundations for the next major breakthroughs in this area. While some research in this field remains theoretical and might never directly lead to real-world applications, several nanorobotics labs focus specifically on projects that might have near-term practical applications.
Article
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: A large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.