Ran He

Dalian University of Technology, Dalian, Liaoning, China

Publications (83)

  • Shu Zhang · Ran He · Zhenan Sun · Tieniu Tan
    Conference Paper · Jun 2016
  • Yanbo Fan · Ran He · Jian Liang · Bao-Gang Hu
    Abstract: Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals, gradually learning from easy to hard samples. One key issue in SPL is to obtain a better weighting strategy, which is determined by the minimizer functions. Existing methods usually pursue this by artificially designing the explicit form of the regularizers. In this paper, we focus on the minimizer functions and study a group of new regularizers, named self-paced implicit regularizers, that are derived from convex conjugacy. Based on the multiplicative form of half-quadratic optimization, minimizer functions induced by convex and non-convex functions are developed for the implicit regularizers, and a general framework for SPL (named SPL-IR) is developed accordingly. We further analyze the relation between SPL-IR and half-quadratic optimization. We apply SPL-IR to matrix factorization and multi-view clustering. Experimental results on both synthetic and real-world databases corroborate our ideas and demonstrate the effectiveness of implicit regularizers. (A weighting sketch follows this entry.)
    Article · Jun 2016
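
A minimal sketch of the easy-to-hard weighting at the core of SPL, assuming nothing from the paper beyond what the abstract states: the hard weighting is the classical SPL scheme, and the Welsch-style soft weighting stands in for the kind of minimizer function that the multiplicative half-quadratic form yields (the exact SPL-IR minimizers are not reproduced here; the function names and parameters are illustrative).

```python
import numpy as np

def hard_weights(losses, lam):
    """Classical SPL weighting: admit a sample only if its loss is below the age parameter lam."""
    return (losses < lam).astype(float)

def soft_weights(losses, lam):
    """A Welsch-style soft weighting of the kind produced by the multiplicative
    half-quadratic form: weights decay smoothly instead of cutting off."""
    return np.exp(-losses / lam)

# One SPL alternation fixes the model, recomputes the sample weights, then
# refits the model on the weighted samples; only the weight update is shown.
losses = np.array([0.1, 0.5, 2.0, 5.0])
print(hard_weights(losses, lam=1.0))  # [1. 1. 0. 0.] -> hard samples deferred
print(soft_weights(losses, lam=1.0))  # smooth decay over the same losses
```
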
  • Jun Guo · Yanqing Guo · Xiangwei Kong · [...] · Ran He
    Abstract: This is a poster presented at the Vision And Learning SEminar (VALSE) in Wuhan, China.
    Full-text Dataset · Apr 2016
  • Linlin Cao · Ran He · Bao-Gang Hu
    Abstract: This work is a further study of the Generalized Constraint Neural Network (GCNN) model [1], [2]. Two challenges are encountered in the study: how to embed any type of prior information, and how to select its imposing scheme. The work focuses on the second challenge and studies a new constraint-imposing scheme for equality constraints. A new method called the locally imposing function (LIF) is proposed to provide a local correction to the GCNN prediction function, and it therefore falls within the Locally Imposing Scheme (LIS). In comparison, the conventional Lagrange multiplier method is considered a Globally Imposing Scheme (GIS) because its added constraint term has a global impact on the objective function. LIS offers two advantages over GIS. First, LIS enables constraints to fire locally and explicitly, only in the regions of the domain where the prediction function needs them. Second, constraints can be implemented directly within a network setting. We attempt to interpret several constraint methods graphically from the viewpoint of the locality principle. Numerical examples confirm the advantages of the proposed method. In solving boundary value problems with Dirichlet and Neumann constraints, the GCNN model with LIF can achieve exact satisfaction of the constraints. (A sketch of the locally imposing idea follows this entry.)
    Article · Apr 2016
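
A minimal sketch of the locally imposing idea, not the paper's exact LIF: blend the network prediction with the constraint value through a window function that equals 1 where the equality constraint must hold and decays to 0 elsewhere, so the correction fires only locally. The Gaussian window and its width are assumptions for illustration.

```python
import numpy as np

def local_window(x, x_c, width=0.1):
    """Window equal to 1 at the constraint location x_c, decaying to 0 away from it."""
    return np.exp(-((x - x_c) / width) ** 2)

def locally_imposed_prediction(f_net, g_constraint, x, x_c):
    """Blend the raw network prediction with the constraint value so the
    correction fires only near x_c, where f(x_c) = g(x_c) must hold."""
    w = local_window(x, x_c)
    return (1.0 - w) * f_net(x) + w * g_constraint(x)

# Toy usage: impose a Dirichlet-style boundary value of 0 at x = 0.
f_net = lambda x: np.sin(x) + 0.3       # an imperfect network prediction
g = lambda x: np.zeros_like(x)          # the boundary value to impose
x = np.linspace(0.0, 1.0, 5)
print(locally_imposed_prediction(f_net, g, x, x_c=0.0))  # exactly 0.0 at x = 0
```
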
  • Jun Guo · Yanqing Guo · Xiangwei Kong · [...] · Ran He
    Abstract: Dictionary learning (DL) has been successfully applied to various pattern classification tasks in recent years. However, analysis dictionary learning (ADL), a major branch of DL, has not yet been fully exploited in classification due to its poor discriminability. This paper presents a novel DL method, namely Discriminative Analysis Dictionary Learning (DADL), to improve the classification performance of ADL. First, a code-consistency term is integrated into the basic analysis model to improve discriminability. Second, a triplet-constraint-based local topology preserving loss function is introduced to capture the discriminative geometrical structures embedded in the data. Third, the correntropy-induced metric is employed as a robust measure to better control outliers for classification. Then, half-quadratic minimization and an alternating search strategy are used to speed up the optimization process, so that closed-form solutions exist in each alternating minimization stage. Experiments on several commonly used databases show that our proposed method not only significantly improves the discriminative ability of ADL, but also outperforms state-of-the-art synthesis DL methods. (A correntropy sketch follows this entry.)
    Full-text Conference Paper · Feb 2016
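
Correntropy, the robust measure named in this entry and in several below, has a compact form: a Gaussian kernel applied to the residual and averaged over samples, so a gross outlier contributes almost nothing. A minimal sketch; the kernel bandwidth is an arbitrary choice here.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy between two vectors: mean Gaussian kernel of the
    residuals. Large residuals saturate the kernel near 0, so outliers are
    suppressed rather than dominating, as they do under squared error."""
    r = x - y
    return np.mean(np.exp(-(r ** 2) / (2.0 * sigma ** 2)))

# Toy usage: one gross outlier barely moves correntropy.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_clean = x + 0.05
y_outlier = y_clean.copy(); y_outlier[0] += 100.0
print(correntropy(x, y_clean))    # close to 1 (high similarity)
print(correntropy(x, y_outlier))  # ~0.75: the outlier forfeits only its own 1/4 share
```
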
  • Abstract: Subspace clustering has achieved great success in many computer vision applications. However, most subspace clustering algorithms require well-aligned data samples, which is often not straightforward to achieve. This paper proposes a Transformation Invariant Subspace Clustering framework that jointly aligns data samples and learns the subspace representation. After alignment, the transformed data samples become highly correlated and a better affinity matrix can be obtained. The joint problem can be reduced to a sequence of Least Squares Regression problems, which can be solved efficiently. We verify the effectiveness of the proposed method with extensive experiments on unaligned real data, demonstrating higher clustering accuracy than state-of-the-art subspace clustering and transformation-invariant clustering algorithms. (An LSR sketch follows this entry.)
    Article · Feb 2016
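
The Least Squares Regression (LSR) subproblem mentioned above has a closed-form solution: represent each sample as a linear combination of the others and turn the coefficients into an affinity matrix. A minimal sketch of plain LSR-based subspace clustering, with the paper's joint alignment step omitted and the regularization weight chosen arbitrarily.

```python
import numpy as np

def lsr_affinity(X, lam=0.1):
    """Least Squares Regression self-representation: solve
    min_Z ||X - X Z||_F^2 + lam ||Z||_F^2 in closed form
    (X holds one sample per column), then symmetrize the
    coefficients into an affinity matrix."""
    n = X.shape[1]
    Z = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    return (np.abs(Z) + np.abs(Z.T)) / 2.0  # input to spectral clustering

# Toy usage: columns 0 and 1 lie on the same 1-D subspace and get
# strong mutual affinity; column 2 does not.
X = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0]])
print(np.round(lsr_affinity(X), 2))
```
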
  • Conference Paper · Jan 2016
  • Kaiye Wang · Ran He · Liang Wang · [...] · Tieniu Tan
    Abstract: Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods focus only on the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices that map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, ℓ21-norm penalties are imposed on the projection matrices separately to solve the second problem, selecting relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data, preserving the inter-modality and intra-modality similarity relationships. An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms state-of-the-art subspace approaches. (An ℓ21-norm sketch follows this entry.)
    Article · Dec 2015 · IEEE Transactions on Pattern Analysis and Machine Intelligence
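
The ℓ21-norm penalty used above for coupled feature selection is the sum of the ℓ2 norms of a matrix's rows; minimizing it drives whole rows of a projection matrix to zero, discarding the corresponding features. A minimal sketch of the norm and the row sparsity it rewards; the matrices are illustrative, not learned.

```python
import numpy as np

def l21_norm(W):
    """ell_{2,1} norm: sum over rows of the row-wise ell_2 norm.
    Rows correspond to input features, so zero rows mean discarded features."""
    return np.sum(np.linalg.norm(W, axis=1))

# Toy usage: a row-sparse projection has a smaller l21 norm than a dense one
# of comparable scale, so the penalty favors selecting few features.
W_dense = np.full((4, 2), 0.5)
W_row_sparse = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(l21_norm(W_dense), l21_norm(W_row_sparse))  # the dense matrix pays for every row
```
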
  • Ran He · Liang Wang · Zhenan Sun · [...] · Bo Li
    Abstract: This paper addresses the problem of grouping data points sampled from a union of multiple subspaces in the presence of outliers. Information-theoretic objective functions are proposed that combine structured low-rank representations (LRRs), to capture the global structure of the data, with information-theoretic measures, to handle outliers. In the theoretical part, we point out that group sparsity-induced measures (ℓ2,1-norm, ℓα-norm, and correntropy) can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates both convergence study and algorithmic development. In particular, a general formulation is proposed that unifies HQ-based group sparsity methods into a common framework. In the algorithmic part, we develop information-theoretic subspace clustering methods via correntropy. With the help of Parzen window estimation, correntropy is used to handle either outliers under arbitrary distributions or sample-specific errors in the data. Pairwise link constraints are further treated as a prior structure on the LRRs. Based on the HQ framework, iterative algorithms are developed to solve the nonconvex information-theoretic loss functions. Experimental results on three benchmark databases show that our methods further improve the robustness of LRR subspace clustering and outperform other state-of-the-art subspace clustering methods. (An HQ sketch follows this entry.)
    Article · Dec 2015 · IEEE Transactions on Neural Networks and Learning Systems
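
Half-quadratic optimization in its multiplicative form, which this entry uses to justify the group-sparsity measures, alternates closed-form auxiliary weights with a weighted least-squares step, i.e. iteratively reweighted least squares. A minimal sketch for robust regression under the Welsch (correntropy) loss; the loss choice, bandwidth, and iteration count are assumptions.

```python
import numpy as np

def hq_robust_regression(A, b, sigma, iters=20):
    """Multiplicative half-quadratic minimization of the Welsch loss
    sum_i (1 - exp(-r_i^2 / (2 sigma^2))), with r = A x - b.
    Alternates closed-form HQ weights with a weighted least-squares step."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # plain LS initialization
    for _ in range(iters):
        r = A @ x - b
        w = np.exp(-(r ** 2) / (2.0 * sigma ** 2))  # outliers get tiny weights
        Aw = A * w[:, None]                         # rows of A scaled by w
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)     # solve (A^T W A) x = A^T W b
    return x

# Toy usage: recover the line y = 1 + 2 t despite one gross outlier.
# The bandwidth sigma must be scaled to the inlier residuals.
t = np.arange(6.0)
A = np.column_stack([np.ones(6), t])
b = A @ np.array([1.0, 2.0])
b[-1] += 50.0
print(hq_robust_regression(A, b, sigma=5.0))  # close to [1. 2.]
```
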
  • Xiang Wu · Ran He · Zhenan Sun
    Abstract: Convolutional neural networks (CNNs) have significantly pushed forward the development of face recognition techniques. To achieve the highest accuracy, CNN models tend to be deeper or to ensemble multiple local facial patches, which results in a waste of time and space. To alleviate this issue, this paper studies a lightened CNN framework that learns a compact embedding for face representation. First, we introduce the maxout concept from the fully connected layer into the convolution layers, which leads to a new activation function named Max-Feature-Map (MFM). Compared with the widely used ReLU, MFM can simultaneously capture a compact representation and competitive information. Then, one shallow CNN model is constructed from 4 convolution layers and contains about 4M parameters in total; the other is built on the first by reducing the kernel sizes of the convolution layers and adding Network in Network (NIN) layers between them. These models are trained on the CASIA-WebFace dataset and evaluated on the LFW and YTF datasets. Experimental results show that the proposed models achieve state-of-the-art results. At the same time, computational cost is reduced by over 9 times in comparison with the released VGG model. (An MFM sketch follows this entry.)
    Article · Nov 2015
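
The MFM activation itself is easy to state: split the channels of a feature map into two halves and take the element-wise maximum, which halves the channel count and lets competing filters suppress each other. A minimal NumPy sketch; the channel-first layout is an assumption.

```python
import numpy as np

def max_feature_map(x):
    """Max-Feature-Map (MFM) activation: element-wise max over the two halves
    of the channel axis, so C channels in -> C/2 channels out.
    x has shape (channels, height, width) with an even channel count."""
    c = x.shape[0]
    assert c % 2 == 0, "MFM needs an even number of channels"
    return np.maximum(x[: c // 2], x[c // 2:])

# Toy usage: 4 channels in, 2 out; each output keeps the stronger response.
x = np.arange(16.0).reshape(4, 2, 2)
print(max_feature_map(x).shape)  # (2, 2, 2)
```
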
  • Lingxiao Song · Man Zhang · Qi Li · [...] · Ran He
    Conference Paper · Nov 2015
  • Shu Zhang · Man Zhang · Qi Li · [...] · Ran He
    Conference Paper · Nov 2015
  • Dong Wang · Ran He · Liang Wang · Tieniu Tan
    Conference Paper · Nov 2015
  • Dong Cao · Ran He · Zhenan Sun · Tieniu Tan
    Conference Paper · Nov 2015
  • Yanbo Fan · Ran He · Bao-Gang Hu
    Conference Paper · Nov 2015
  • Jian Liang · Dong Cao · Ran He · [...] · Tieniu Tan
    Conference Paper · Nov 2015
  • Lingxiao Song · Man Zhang · Zhenan Sun · [...] · Ran He
    Abstract: Greedy subspace clustering methods provide an efficient way to cluster large-scale multimedia datasets. However, these methods do not guarantee a global optimum, and their clustering performance depends mainly on their initializations. To alleviate this initialization problem, this paper proposes a two-step greedy strategy that explores proper neighbors to span an initial subspace. First, for each data point, we seek a sparse representation with respect to its nearest neighbors. The data points corresponding to nonzero entries in the learned representation form an initial subspace, which potentially rejects bad or redundant data points. Second, the subspace is updated by adding an orthogonal basis derived from the newly added data points. Experimental results on real-world applications demonstrate that our method can significantly improve the clustering accuracy of greedy subspace clustering methods without sacrificing much computational time. (A basis-update sketch follows this entry.)
    Chapter · Sep 2015
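
The orthogonal basis update in the second step can be illustrated with an incremental QR-style rule: project each newly added point off the current basis and, if a significant residual remains, append its normalized direction. A minimal sketch of this generic greedy update; the paper's neighbor-based sparse initialization is omitted and the tolerance is an assumption.

```python
import numpy as np

def grow_basis(U, x, tol=1e-8):
    """Append the component of x orthogonal to the current basis U
    (orthonormal columns); skip x if it is already in the span."""
    r = x - U @ (U.T @ x) if U.size else x.copy()
    norm = np.linalg.norm(r)
    if norm < tol:
        return U  # x adds nothing new to the subspace
    r /= norm
    return np.column_stack([U, r]) if U.size else r[:, None]

# Toy usage: the second point is redundant, the third grows the basis.
U = np.empty((3, 0))
for x in [np.array([1.0, 0, 0]), np.array([2.0, 0, 0]), np.array([0, 1.0, 0])]:
    U = grow_basis(U, x)
print(U.shape)  # (3, 2)
```
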
  • Shu Zhang · Jian Liang · Ran He · Zhenan Sun
    Abstract: Learning-based hashing techniques have attracted broad research interest in the Big Media research area. They aim to learn compact binary codes that preserve semantic similarity in the Hamming embedding. However, the discrete constraints imposed on binary codes typically make hashing optimizations very challenging. In this paper, we present a code consistent hashing (CCH) algorithm to learn discrete binary hash codes. To form a simple yet efficient hashing objective function, we introduce a new code consistency constraint to leverage discriminative information, and we propose to use the Hadamard code, which favors an information-theoretic criterion, as the class prototype. By keeping the discrete constraint and introducing an orthogonality constraint, our objective function can be minimized efficiently. Experimental results on three benchmark datasets demonstrate that the proposed CCH outperforms state-of-the-art hashing methods in both image retrieval and classification tasks, especially with short binary codes. (A Hadamard-prototype sketch follows this entry.)
    Article · Sep 2015
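
Hadamard codes make natural class prototypes because the rows of a Hadamard matrix are mutually orthogonal ±1 codes whose pairwise Hamming distance is exactly half the code length. A minimal sketch of building them by the Sylvester construction and retrieving by Hamming distance; assigning row c to class c is an assumption for illustration.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Each row is a +-1 code; distinct rows differ in exactly n/2 positions."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hamming(a, b):
    """Hamming distance between two +-1 codes."""
    return int(np.sum(a != b))

# Toy usage: give each class the corresponding Hadamard row as its prototype;
# a query code with one corrupted bit is still assigned to the right class.
H = hadamard(8)
query_code = H[2].copy(); query_code[0] *= -1            # 1 bit flipped
dists = [hamming(query_code, H[c]) for c in range(8)]
print(int(np.argmin(dists)))  # recovers class 2
```
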
  • Abstract: Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the complex noise in high-dimensional data. Complex noise has a much more intricate statistical structure and is neither Gaussian nor Laplacian. Recent subspace clustering methods usually assume a sparse representation of the errors incurred by noise and correct these errors iteratively; however, large corruptions incurred by complex noise cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. Its objective function mainly includes two parts. The first part seeks a sparse representation of each high-dimensional data point in terms of the other data points. The second part maximizes the correntropy between a given data point and its low-dimensional representation by the other points. Correntropy is a robust measure, so the influence of large corruptions on subspace clustering can be greatly suppressed (see the correntropy and half-quadratic sketches earlier in this list). An extension of pairwise link constraints is also proposed as prior information to deal with complex noise. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on three commonly used datasets show that our method outperforms state-of-the-art subspace clustering methods.
    Article · Jul 2015 · IEEE Transactions on Image Processing
  • Abstract: This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program that includes two parts. The first part learns stable ordinal filters that project video data into a large-margin ordinal space. The second part seeks self-correcting, discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method, with a simple voting classifier, can achieve state-of-the-art recognition rates using fewer features and samples. (An ordinal-coding sketch follows this entry.)
    Article · Jul 2015
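
Ordinal measures themselves are simple to illustrate: a binary code records, for each filter, only whether its response exceeds zero, discarding magnitudes. A minimal sketch with random filters standing in for the learned ordinal filters of the paper.

```python
import numpy as np

def ordinal_codes(X, W):
    """Ordinal features: keep only the sign of each filter response.
    X: (n_samples, dim), W: (dim, n_filters) -> binary codes (n_samples, n_filters)."""
    return (X @ W > 0).astype(np.uint8)

# Toy usage: random filters stand in for learned ordinal filters.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 16))   # four face-frame feature vectors
W = rng.normal(size=(16, 8))   # eight filters -> 8-bit ordinal codes
print(ordinal_codes(X, W))
```
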

Publication Stats

1k Citations

Institutions

  • 2010-2012
    • Dalian University of Technology
      • School of Electronic and Information Engineering
      Dalian, Liaoning, China
  • 2006-2012
    • Chinese Academy of Sciences
      • Institute of Automation
      • National Laboratory of Pattern Recognition
      Beijing, China