Ran He

Northeast Institute of Geography and Agroecology, Beijing, Beijing Shi, China

Publications (41) · 17.68 Total Impact Points

  • Source
    ABSTRACT: Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to both error correction and error detection. More specifically, using the additive form of HQ, we propose an $\ell_1$-regularized error correction method that iteratively recovers corrupted data from errors caused by noise and outliers; using the multiplicative form of HQ, we propose an $\ell_1$-regularized error detection method that iteratively learns from uncorrupted data. We also show that the $\ell_1$-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 02/2014; 36(2):261-275.
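    The soft-thresholding/Huber duality mentioned in the abstract above can be seen in a few lines of NumPy. This is a generic illustration, not the authors' code, and the threshold `lam` is an arbitrary choice.

    ```python
    import numpy as np

    def soft_threshold(x, lam):
        """Proximity operator of lam * ||x||_1 (soft-thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    # The residual left behind by soft-thresholding is exactly the clipped
    # influence function of the Huber M-estimator -- the duality used above.
    x = np.linspace(-3, 3, 7)
    lam = 1.0
    print(x - soft_threshold(x, lam))   # [-1. -1. -1.  0.  1.  1.  1.]
    print(np.clip(x, -lam, lam))        # identical values
    ```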
  • Source
    ABSTRACT: Great progress has been achieved in face recognition in the last three decades. However, it is still challenging to characterize the identity-related features in face images. This paper proposes a novel facial feature extraction method named Gabor Ordinal Measures (GOM), which integrates the distinctiveness of Gabor features and the robustness of ordinal measures as a promising solution for jointly handling inter-person similarity and intra-person variations in face images. In the proposed method, different kinds of ordinal measures are derived from the magnitude, phase, real and imaginary components of Gabor images, respectively, and are then jointly encoded as visual primitives in local regions. The statistical distributions of these visual primitives in face image blocks are concatenated into a feature vector, and linear discriminant analysis is further used to obtain a compact and discriminative feature representation. Finally, a two-stage cascade learning method and a greedy block selection method are used to train a strong classifier for face recognition. Extensive experiments on publicly available face image databases such as FERET, AR and the large-scale FRGC v2.0 demonstrate the state-of-the-art face recognition performance of GOM.
    IEEE Transactions on Information Forensics and Security 11/2013; PP(99). · 1.90 Impact Factor
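    A much simplified sketch of the encode-into-a-decimal-code idea described above, assuming hand-rolled Gabor kernels and a single-scale, image-mean ordinal comparison (the paper's GOM uses multi-lobe ordinal filters over magnitude, phase, real and imaginary parts):

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def gabor_pair(freq, theta, sigma=3.0, size=15):
        """Real (cosine) and imaginary (sine) Gabor kernels."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
        return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

    def ordinal_code(image, freq=0.2, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Binarize each orientation's Gabor magnitude against its mean and
        pack the four bits into one decimal code per pixel."""
        code = np.zeros(image.shape, dtype=int)
        for bit, theta in enumerate(thetas):
            kr, ki = gabor_pair(freq, theta)
            mag = np.hypot(convolve2d(image, kr, mode="same"),
                           convolve2d(image, ki, mode="same"))
            code |= (mag > mag.mean()).astype(int) << bit
        return code

    img = np.random.default_rng(0).random((64, 64))
    print(np.bincount(ordinal_code(img).ravel(), minlength=16))
    ```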
  • ABSTRACT: Low-rank matrix recovery algorithms aim to recover a corrupted low-rank matrix with sparse errors. However, corrupted errors may not be sparse in real-world problems, and the relationship between the L1 regularizer on noise and robust M-estimators remains unclear. This paper proposes a general robust framework for low-rank matrix recovery via implicit regularizers of robust M-estimators, which are derived from convex conjugacy and can be used to model arbitrarily corrupted errors. Based on the additive form of half-quadratic optimization, proximity operators of implicit regularizers are developed such that both the low-rank structure and the corrupted errors can be alternately recovered. In particular, the dual relationship between the absolute function in the L1 regularizer and the Huber M-estimator is studied, which connects robust low-rank matrix recovery methods with M-estimator-based robust principal component analysis methods. Extensive experiments on synthetic and real-world datasets corroborate our claims and verify the robustness of the proposed framework.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 09/2013.
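    As a toy illustration of proximity operators in the additive half-quadratic setting (not the paper's algorithm), the following alternates singular-value thresholding for the low-rank part with the soft-thresholding operator that is dual to the Huber M-estimator; `tau`, `lam` and the synthetic data are arbitrary choices.

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular value thresholding: proximity operator of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(M, tau):
        """Proximity operator of tau * ||.||_1 (the implicit regularizer dual to Huber)."""
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def robust_lowrank(X, tau=2.0, lam=0.5, iters=100):
        """Block coordinate descent on tau*||L||_* + lam*||E||_1 + 0.5*||X - L - E||_F^2."""
        L = np.zeros_like(X)
        E = np.zeros_like(X)
        for _ in range(iters):
            L = svt(X - E, tau)     # low-rank update
            E = soft(X - L, lam)    # corruption update
        return L, E

    rng = np.random.default_rng(0)
    L0 = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 60))
    X = L0 + np.where(rng.random((60, 60)) < 0.05, 10.0, 0.0)   # sparse gross errors
    L, E = robust_lowrank(X)
    print(np.linalg.norm(L - L0) / np.linalg.norm(L0))          # far below the corruption level
    ```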
  • ABSTRACT: This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on a large-scale database. Based on the divide-and-conquer strategy, TSR decomposes the procedure of robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected. Potential loss functions, including L1, L2,1, and correntropy, are studied. In the second stage, based on the learned metric and collaborative representation, we propose an efficient nonnegative sparse representation algorithm to find an approximate solution of sparse representation. According to the L1 ball theory in sparse representation, the approximated solution is unique and can be optimized efficiently. A filtering strategy is then developed to avoid computing the sparse representation on the whole large-scale dataset. Moreover, theoretical analysis gives the necessary condition for the nonnegative least squares technique to find a sparse solution. Extensive experiments on several public databases demonstrate that the proposed TSR approach, in general, achieves better classification accuracy than state-of-the-art sparse representation methods. More importantly, a significant reduction in computational cost is achieved in comparison with the sparse representation classifier, which makes TSR more suitable for robust face recognition on a large-scale dataset.
    IEEE Transactions on Neural Networks and Learning Systems 01/2013; 24(1):35-46.
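    The second-stage idea above, nonnegative coding followed by class-wise residual comparison, can be sketched with SciPy's nonnegative least squares on synthetic data; the gallery size, labels and noise level below are made up purely for illustration.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    D = np.abs(rng.standard_normal((100, 40)))       # 40 gallery faces as columns
    labels = np.repeat(np.arange(8), 5)              # 8 subjects, 5 samples each
    y = D[:, 7] + 0.01 * rng.standard_normal(100)    # probe drawn from subject 1

    x, _ = nnls(D, y)                                # nonnegative coefficients
    # classify by the smallest class-wise reconstruction residual
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in range(8)]
    print(int(np.argmin(residuals)))                 # expected: 1 (the probe's subject)
    ```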
  • ABSTRACT: Non-negative matrix factorization (NMF) and its variants have been explored in the last decade and remain attractive due to their ability to extract non-negative basis images. However, most existing NMF-based methods are not ready for encoding higher-order data information. One reason is that they do not directly/explicitly model structured data information during learning, and therefore the extracted basis images may not describe the “parts” in an image [1] very well. To address this problem, structured sparse NMF has recently been proposed to learn structured basis images. It, however, depends on special prior knowledge, i.e., one needs to exhaustively define a set of structured patterns in advance. In this paper, we wish to perform structured sparsity learning as automatically as possible. To that end, we propose a pixel dispersion penalty (PDP), which effectively describes the spatial dispersion of pixels in an image without using any manually predefined structured patterns as constraints. In PDP, we consider each part-based feature pattern of an image as a cluster of non-zero pixels; that is, the non-zero pixels of a local pattern should be spatially close to each other. Furthermore, by incorporating the proposed PDP, we develop a spatial non-negative matrix factorization (Spatial NMF) and a spatial non-negative component analysis (Spatial NCA). In Spatial NCA, the non-negativity constraint is imposed only on basis images and the constraint on coefficients is released, so both subtractive and additive combinations of non-negative basis images are allowed for reconstructing any image. Extensive experiments are conducted to validate the effectiveness of the proposed pixel dispersion penalty. We also experimentally show that Spatial NCA is more flexible for extracting non-negative basis images and obtains better and more stable performance.
    Pattern Recognition. 08/2012; 45(8):2912–2926.
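    The paper defines PDP through pairwise spatial distances between pixels; the following simplified stand-in (an intensity-weighted coordinate variance) only captures the same intuition that compact "parts" should be penalized less than scattered pixels.

    ```python
    import numpy as np

    def pixel_dispersion_penalty(basis_img):
        """Intensity-weighted variance of pixel coordinates: compact blobs score
        low, spatially scattered pixels score high."""
        h, w = basis_img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        wgt = basis_img / (basis_img.sum() + 1e-12)
        cy, cx = (wgt * yy).sum(), (wgt * xx).sum()
        return (wgt * ((yy - cy) ** 2 + (xx - cx) ** 2)).sum()

    rng = np.random.default_rng(0)
    blob = np.zeros((32, 32)); blob[10:14, 10:14] = 1.0          # a compact "part"
    scattered = np.zeros((32, 32))
    scattered[rng.integers(0, 32, 16), rng.integers(0, 32, 16)] = 1.0
    print(pixel_dispersion_penalty(blob) < pixel_dispersion_penalty(scattered))  # True
    ```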
  • Source
    ABSTRACT: This work presents a systematic study of objective evaluations of abstaining classifications using information-theoretic measures (ITMs). First, we define objective measures as those that do not depend on any free parameter; this definition makes it straightforward to examine the “objectivity” or “subjectivity” of classification evaluations. Second, we propose 24 normalized ITMs for investigation, which are derived from either mutual information, divergence, or cross-entropy. Contrary to conventional performance measures that apply empirical formulas based on users' intuitions or preferences, the ITMs are theoretically more general for realizing objective evaluations of classifications. They are able to distinguish “error types” and “reject types” in binary classifications without requiring cost terms as input. Third, to better understand and select the ITMs, we suggest three desirable features for classification assessment measures, which appear more crucial and appealing from the viewpoint of classification applications. Using these features as “meta-measures”, we reveal the advantages and limitations of ITMs from a higher level of evaluation knowledge. Numerical examples are given to demonstrate our claims and compare the differences among the proposed measures. The best measure is selected in terms of the meta-measures, and its specific properties regarding error types and reject types are analytically derived.
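    A minimal sketch of the underlying idea: a mutual-information measure computed on a confusion matrix with a reject column can separate two abstaining classifiers that a plain accuracy count cannot. The normalization by H(T) and the toy matrices below are illustrative choices, not the paper's 24 measures.

    ```python
    import numpy as np

    def normalized_mi(conf):
        """I(T;Y) / H(T) for a confusion matrix whose rows are true classes and
        whose last column counts rejected samples."""
        p = conf / conf.sum()
        pt = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            mi = np.where(p > 0, p * np.log2(p / (pt * py)), 0.0).sum()
        h_t = -np.sum(np.where(pt > 0, pt * np.log2(pt), 0.0))
        return mi / h_t

    # Same number of correct decisions, but A abstains where B errs:
    A = np.array([[45,  0, 5],      # class 1: 45 correct, 5 rejects
                  [ 0, 45, 5]])     # class 2: 45 correct, 5 rejects
    B = np.array([[45,  5, 0],      # class 1: 5 errors instead of rejects
                  [ 5, 45, 0]])
    print(normalized_mi(A), normalized_mi(B))   # the measure tells them apart
    ```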
  • ABSTRACT: This paper proposes a new image representation method named Histograms of Gabor Ordinal Measures (HOGOM) for robust face recognition. First, a novel texture descriptor, Gabor Ordinal Measures (GOM), is developed to inherit the advantages of Gabor features and Ordinal Measures. GOM applies Gabor filters of different orientations and scales to the face image and then computes Ordinal Measures over each Gabor magnitude response. Second, in order to obtain an effective and compact representation, the binary values of each GOM, for different orientations at a given scale, are encoded into a single decimal number, and spatial histograms of non-overlapping rectangular regions are then computed. Finally, a nearest-neighbor classifier with the χ2 dissimilarity measure is used for classification. HOGOM has three principal advantages: 1) it inherits the spatial locality and orientation selectivity of Gabor features; 2) the adopted region-comparison strategy makes it more robust; 3) by applying the binary codification and computing spatial histograms, it becomes more stable and efficient. Extensive experiments on the large-scale FERET database and the AR database show the robustness of the proposed descriptor, which achieves state-of-the-art performance.
    International Conference on Biometrics; 03/2012
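    The block-histogram plus χ2 matching step is easy to sketch; the coded image below is random rather than an actual GOM code map, and the grid size is an arbitrary choice.

    ```python
    import numpy as np

    def chi2_dissimilarity(h1, h2, eps=1e-10):
        """Chi-squared distance between two concatenated spatial histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def spatial_histograms(code_img, n_codes=16, grid=(4, 4)):
        """Per-block code histograms, concatenated into one feature vector."""
        h, w = code_img.shape
        bh, bw = h // grid[0], w // grid[1]
        feats = [np.bincount(code_img[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel(),
                             minlength=n_codes)
                 for i in range(grid[0]) for j in range(grid[1])]
        return np.concatenate(feats).astype(float)

    a = np.random.default_rng(0).integers(0, 16, (64, 64))
    b = np.random.default_rng(1).integers(0, 16, (64, 64))
    print(chi2_dissimilarity(spatial_histograms(a), spatial_histograms(b)))
    ```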
  • ABSTRACT: Mean-Shift (MS) is a powerful nonparametric clustering method. Although good accuracy can be achieved, its computational cost is particularly expensive even on moderate data sets. In this paper, for the purpose of algorithmic speedup, we develop an agglomerative MS clustering method along with its performance analysis. Our method, namely Agglo-MS, is built upon an iterative query-set compression mechanism which is motivated by the quadratic bounding optimization nature of the MS algorithm. The whole framework can be efficiently implemented in linear running time complexity. We then extend Agglo-MS into an incremental version which performs comparably to its batch counterpart. The efficiency and accuracy of Agglo-MS are demonstrated by extensive comparative experiments on synthetic and real data sets.
    IEEE Transactions on Knowledge and Data Engineering 03/2012; · 1.89 Impact Factor
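    For reference, the plain quadratic-cost mean-shift baseline that Agglo-MS accelerates looks like the sketch below (Gaussian kernel, arbitrary bandwidth; the paper's query-set compression is not shown).

    ```python
    import numpy as np

    def mean_shift(X, bandwidth=0.5, iters=50):
        """Move every point to the Gaussian-kernel weighted mean of the data
        until it settles on a density mode (O(n^2) per iteration)."""
        Y = X.copy()
        for _ in range(iters):
            d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2 * bandwidth ** 2))
            Y = (W @ X) / W.sum(axis=1, keepdims=True)
        return Y

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    modes = mean_shift(X)
    print(np.unique(np.round(modes, 1), axis=0))    # roughly the two cluster modes
    ```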
  • ABSTRACT: In this paper, we study the problem of robust feature extraction based on l2,1 regularized correntropy from both theoretical and algorithmic perspectives. On the theoretical side, we point out that an l2,1-norm minimization can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates convergence study and algorithmic development. In particular, a general formulation is proposed to unify l1-norm and l2,1-norm minimization within a common framework. On the algorithmic side, we propose an l2,1 regularized correntropy algorithm to extract informative features while removing outliers from training data. A new alternate minimization algorithm is also developed to optimize the non-convex correntropy objective. For face recognition, we apply the proposed method to obtain an appearance-based model, called Sparse-Fisherfaces. Extensive experiments show that our method can select robust and sparse features, and outperforms several state-of-the-art subspace methods on large-scale and open face recognition datasets.
    2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 01/2012
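    The l2,1 norm and the half-quadratic reweighting it induces are the core ingredients; a schematic NumPy version (not the paper's full feature-selection algorithm) is:

    ```python
    import numpy as np

    def l21_norm(W):
        """Sum of row-wise l2 norms: zeroing whole rows = joint feature selection."""
        return np.linalg.norm(W, axis=1).sum()

    def hq_row_weights(W, eps=1e-8):
        """Half-quadratic auxiliary weights for ||W||_{2,1}: row i is weighted by
        1 / (2 ||w_i||), so nearly-zero rows are pushed harder toward zero."""
        return 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), eps))

    W = np.array([[3.0, 4.0], [0.1, 0.0], [0.0, 0.0]])
    print(l21_norm(W))         # 5.1
    print(hq_row_weights(W))   # small weight for the large row, huge for empty rows
    ```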
  • ABSTRACT: Fisher’s Linear Discriminant Analysis (LDA) has been recognized as a powerful technique for face recognition. However, it can break down in the non-Gaussian case. Nonparametric discriminant analysis (NDA) is a typical algorithm that extends LDA from the Gaussian case to the non-Gaussian case. However, NDA suffers from outlier and class-imbalance problems, which cause a biased estimation of the extra-class scatter information. To address these two problems, we propose a robust large margin discriminant tangent analysis method. A tangent subspace-based algorithm is first proposed to learn a subspace from a set of intra-class and extra-class samples which are distributed in a balanced way on the local manifold patch near each sample point, so that samples from the same class are clustered as closely as possible and samples from different classes are separated far from the tangent center. Each subspace is then aligned to a global coordinate system by tangent alignment. Finally, an outlier detection technique is proposed to learn a more accurate decision boundary. Extensive experiments on challenging face recognition data sets demonstrate the effectiveness and efficiency of the proposed method. Compared to other nonparametric methods, the proposed one is more robust to outliers.
    Neural Computing and Applications 01/2012; 21:269-279. · 1.17 Impact Factor
  • Source
    ABSTRACT: This letter proposes a new multiple linear regression model using regularized correntropy for robust pattern recognition. First, we motivate the use of correntropy to improve the robustness of the classical mean square error (MSE) criterion that is sensitive to outliers. Then an l1 regularization scheme is imposed on the correntropy to learn robust and sparse representations. Based on the half-quadratic optimization technique, we propose a novel algorithm to solve the nonlinear optimization problem. Second, we develop a new correntropy-based classifier based on the learned regularization scheme for robust object recognition. Extensive experiments over several applications confirm that the correntropy-based l1 regularization can improve recognition accuracy and receiver operator characteristic curves under noise corruption and occlusion.
    Neural Computation 08/2011; 23(8):2074-2100. · 1.76 Impact Factor
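    The correntropy/half-quadratic idea reduces, per iteration, to weighted least squares with exponentially down-weighted residuals. The sketch below omits the l1 term and uses made-up data, so it only shows the robustness mechanism rather than the paper's full model.

    ```python
    import numpy as np

    def correntropy_regression(A, y, sigma=1.0, iters=30):
        """Half-quadratic / iteratively reweighted least squares under the Welsch
        (correntropy) loss: large residuals get exponentially small weights."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]
        for _ in range(iters):
            r = y - A @ x
            w = np.exp(-r ** 2 / (2 * sigma ** 2))     # HQ auxiliary weights
            Aw = A * w[:, None]
            x = np.linalg.solve(A.T @ Aw, Aw.T @ y)    # weighted normal equations
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 5))
    x_true = np.arange(1.0, 6.0)
    y = A @ x_true + 0.01 * rng.standard_normal(200)
    y[:20] += 50.0                                     # gross outliers
    print(np.round(correntropy_regression(A, y), 2))   # should stay close to [1 2 3 4 5]
    print(np.round(np.linalg.lstsq(A, y, rcond=None)[0], 2))  # ordinary LS is pulled away
    ```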
  • Source
    ABSTRACT: This work presents a systematic study of objective evaluations of abstaining classifications using Information-Theoretic Measures (ITMs). First, we define objective measures as those that do not depend on any free parameter. This definition provides technical simplicity for examining "objectivity" or "subjectivity" directly in classification evaluations. Second, we propose twenty-four normalized ITMs, derived from either mutual information, divergence, or cross-entropy, for investigation. Contrary to conventional performance measures that apply empirical formulas based on users' intuitions or preferences, the ITMs are theoretically more sound for realizing objective evaluations of classifications. We apply them to distinguish "error types" and "reject types" in binary classifications without requiring cost terms as input. Third, to better understand and select the ITMs, we suggest three desirable features for classification assessment measures, which appear more crucial and appealing from the viewpoint of classification applications. Using these features as "meta-measures", we reveal the advantages and limitations of ITMs from a higher level of evaluation knowledge. Numerical examples are given to corroborate our claims and compare the differences among the proposed measures. The best measure is selected in terms of the meta-measures, and its specific properties regarding error types and reject types are analytically derived.
    Acta Automatica Sinica. 07/2011; 38(7).
  • Source
    ABSTRACT: An informative and discriminative graph plays an important role in graph-based semi-supervised learning methods. This paper introduces a nonnegative sparse algorithm, and its approximated algorithm based on the l0–l1 equivalence theory, to compute the nonnegative sparse weights of a graph. The proposed method is hence termed the sparse probability graph (SPG). The nonnegative sparse weights in the graph naturally serve as clustering indicators, benefiting semi-supervised learning. More importantly, our approximation algorithm speeds up the computation of the nonnegative sparse coding, which has remained a bottleneck in previous attempts at sparse nonnegative graph learning, and it is much more efficient than l1-norm sparsity techniques for learning large-scale sparse graphs. Finally, for discriminative semi-supervised learning, an adaptive label propagation algorithm is also proposed to iteratively predict the labels of data on the SPG. Promising experimental results show that the nonnegative sparse coding is efficient and effective for discriminative semi-supervised learning.
    The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011; 01/2011
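    For the propagation step, a generic label-propagation update on a nonnegative affinity graph (here a dense Gaussian affinity rather than the paper's nonnegative sparse SPG, and the classic rather than the adaptive update) looks like:

    ```python
    import numpy as np

    def propagate_labels(W, Y, alpha=0.99, iters=200):
        """Iterate F <- alpha * S F + (1 - alpha) * Y on the normalized graph
        S = D^(-1/2) W D^(-1/2); unlabeled rows of Y are all zeros."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d) + 1e-12)
        F = Y.astype(float).copy()
        for _ in range(iters):
            F = alpha * S @ F + (1 - alpha) * Y
        return F.argmax(axis=1)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.4, (30, 2)), rng.normal(3, 0.4, (30, 2))])
    W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))   # dense Gaussian affinity
    np.fill_diagonal(W, 0.0)
    Y = np.zeros((60, 2)); Y[0, 0] = 1; Y[30, 1] = 1     # one labeled sample per class
    print(propagate_labels(W, Y))                        # should recover the two blobs
    ```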
  • Source
    ABSTRACT: Recent gait recognition systems often suffer from challenges including viewing-angle variation and large intra-class variations. In order to address these challenges, this paper presents a robust View Transformation Model for gait recognition. Based on the gait energy image, the proposed method establishes a robust view transformation model via robust principal component analysis, and partial least squares is used for feature selection. Compared with existing methods, the proposed method finds a shared, linearly correlated low-rank subspace, which makes the view transformation model robust to viewing-angle variation as well as clothing and carrying-condition changes. Experimental results on the CASIA gait dataset show that the proposed method outperforms existing methods.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
  • Source
    ABSTRACT: Recovering arbitrarily corrupted low-rank matrices arises in many applications, including bioinformatic data analysis and visual tracking. Existing methods typically minimize a combination of the nuclear norm and the l1 norm. We show that by replacing the l1 norm on error terms with nonconvex M-estimators, exact recovery of densely corrupted low-rank matrices is possible. The robustness of the proposed method is guaranteed by M-estimator theory. The multiplicative form of half-quadratic optimization is used to simplify the nonconvex optimization problem so that it can be efficiently solved by an iterative regularization scheme. Simulation results corroborate our claims and demonstrate the efficiency of the proposed method under tough conditions.
    The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011; 01/2011
  • Source
    ABSTRACT: Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to optimize the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method has the following benefits: 1) it is robust to outliers through the mechanism of MCC, which is theoretically more solid than heuristic rules based on MSE; 2) it requires no zero-mean assumption on the data and can estimate the data mean during optimization; and 3) its optimal solution consists of the principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur.
    IEEE Transactions on Image Processing 01/2011; 20(6):1485-94. · 3.20 Impact Factor
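    A schematic half-quadratic loop for correntropy-weighted PCA, assuming a Gaussian (Welsch) weight on per-sample reconstruction error; the parameters and data are illustrative and the kernel extension is omitted.

    ```python
    import numpy as np

    def mcc_pca(X, k=1, sigma=2.0, iters=20):
        """Half-quadratic sketch of correntropy PCA: samples with large
        reconstruction error get exponentially small weights, so the mean and
        covariance (hence the principal directions) are estimated robustly."""
        w = np.ones(X.shape[0])
        for _ in range(iters):
            mu = (w[:, None] * X).sum(0) / w.sum()                 # weighted mean
            C = (w[:, None] * (X - mu)).T @ (X - mu) / w.sum()     # weighted covariance
            U = np.linalg.eigh(C)[1][:, -k:]                       # top-k eigenvectors
            r = X - mu
            err = ((r - r @ U @ U.T) ** 2).sum(1)                  # reconstruction error
            w = np.exp(-err / (2 * sigma ** 2))                    # correntropy weights
        return mu, U

    rng = np.random.default_rng(0)
    X = np.c_[rng.normal(0, 5, 200), rng.normal(0, 0.3, 200)]      # true principal axis: x
    X[:10] = [15.0, 20.0] + rng.normal(0, 1, (10, 2))              # correlated outliers
    print(np.round(mcc_pca(X)[1].ravel(), 2))                      # should stay near [±1, ~0]
    print(np.round(np.linalg.eigh(np.cov(X.T))[1][:, -1], 2))      # plain PCA is visibly tilted
    ```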
  • Source
    ABSTRACT: In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art $l^1$-norm based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a non-negativity constraint on the variables in the maximum correntropy criterion, and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a non-negativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition, as compared to the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operator characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 11/2010.
  • ABSTRACT: Sparse signal representation offers new insight into the face recognition problem. Based on the sparse assumption that a new object can be sparsely represented by other objects, we propose a simple yet efficient direct sparse nearest feature classifier for automatic real-time face recognition. First, we present a new method which calculates an approximate sparse code to alleviate the extrapolation and interpolation inaccuracy of the nearest feature classifier. Second, a sparse score normalization method is developed to normalize the calculated scores and to achieve a high receiver operator characteristic (ROC) curve. Experiments on the FRGC and PIE face databases show that our method obtains results comparable to sparse representation-based classification in both recognition rate and ROC curve. Keywords: Nearest feature classifier; Sparse representation; Receiver operator characteristic; Face recognition
    09/2010: pages 386-394;
  • Source
    Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010; 01/2010

Publication Stats

133 Citations
4k Downloads
3k Views
17.68 Total Impact Points

Institutions

  • 2008–2014
    • Northeast Institute of Geography and Agroecology
      • Institute of Automation
      Beijing, Beijing Shi, China
  • 2006–2013
    • Chinese Academy of Sciences
      • Institute of Automation
      • National Laboratory of Pattern Recognition
      Beijing, Beijing, China
  • 2012
    • Rutgers, The State University of New Jersey
      New Brunswick, New Jersey, United States
  • 2010–2011
    • Dalian University of Technology
      • School of Electronic and Information Engineering
      Dalian, Liaoning, China