Miao Cheng

University of Macau, Macau


Publications (10) · 4.83 total impact

  • Miao Cheng, Chi-Man Pun, Yuan Yan Tang
    ABSTRACT: Nonnegative learning aims to learn part-based representations of nonnegative data and has received much attention in recent years. Nonnegative matrix factorization, which can also be formulated as an optimization problem with bound constraints, has been a popular way to make nonnegative learning applicable. In order to exploit the informative components hidden in nonnegative patterns, a novel nonnegative learning method, termed nonnegative class-specific entropy component analysis, is developed in this work. In contrast to existing methods, the proposed method handles general objective functions, and the conjugate gradient technique is applied to accelerate the iterative optimization. Building on this development, a general nonnegative learning framework is presented for nonnegative optimization problems with general objective costs. Owing to the general objective costs and the nonnegative bound constraints, an ill-conditioned nonnegative learning problem often arises. To address this limitation, a modified line search criterion is proposed that prevents the null trap under guaranteed conditions while keeping each feasible step a descent step. In addition, a numerical stopping rule is employed in place of the popular gradient-based one to improve efficiency. Experiments on face recognition under a variety of conditions show that the proposed method outperforms other methods.
    Pattern Analysis and Applications 02/2014; · 0.81 Impact Factor
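As a rough illustration of nonnegative optimization under bound constraints, the sketch below uses plain projected gradient descent on the squared Frobenius NMF cost. It is a generic stand-in, not the paper's method: the conjugate-gradient update, modified line search, and numerical stopping rule are not reproduced, and the function name and fixed step size are illustrative assumptions.

```python
import numpy as np

def projected_gradient_nmf(V, r, iters=200, step=0.01, seed=0):
    """Approximate nonnegative V as W @ H with W, H >= 0 by projected
    gradient descent on ||V - W H||_F^2. A generic sketch of nonnegative
    learning as bound-constrained optimization (not the paper's
    conjugate-gradient scheme)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        # gradient step on W, then projection onto the nonnegative orthant
        W = np.maximum(W - step * ((W @ H - V) @ H.T), 0.0)
        # same for H, using the freshly updated W
        H = np.maximum(H - step * (W.T @ (W @ H - V)), 0.0)
    return W, H
```

The projection `np.maximum(·, 0)` is what enforces the bound constraints; the paper replaces the fixed step with a line search that guarantees each feasible step remains a descent step.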
  • Miao Cheng, Yuan Yan Tang, Chi-Man Pun
    ABSTRACT: As an important step in machine learning and information processing, feature extraction has received wide attention in the past decades. For high-dimensional data, the feature extraction problem usually comes down to exploiting the intrinsic pattern information through dimensionality reduction. Since nonparametric approaches are applicable without parameter tuning, they are often preferred in real-world applications. In this work, a novel approach, direct maximum margin alignment (DMMA), is proposed for nonparametric feature reduction and extraction. Although a straightforward solution exists for discriminative-ratio-based subspace selection, no such solution has been available for maximum margin alignment. Following the kernel-view idea, DMMA can be performed by introducing a sample kernel while remaining computationally efficient. Experiments on pattern recognition show that the proposed method obtains performance comparable to several state-of-the-art algorithms.
    01/2011;
  • Chi-Man Pun, Ning-Yu An, Miao Cheng
    ABSTRACT: This paper proposes an image segmentation approach based on improved watershed partitioning and DCT energy compaction. The proposed energy compaction measure, which characterizes the local texture of an image region, is derived from the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform, preceded by edge detection and marker extraction, partitions the image into several small disjoint patches, and three features (region size, mean, and variance) are used to compute a region energy for merging. Second, the DCT-based energy compaction serves as the criterion for texture comparison and region merging. Finally, the image is segmented into several partitions. The results show good segmentation robustness and efficiency compared with other state-of-the-art image segmentation algorithms.
    Eighth International Conference on Computer Graphics, Imaging and Visualization, CGIV 2011, Singapore, August 17-19, 2011; 01/2011 · 1.47 Impact Factor
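The DCT energy-compaction idea can be sketched as the fraction of a patch's energy held by its low-frequency coefficients. This minimal version only illustrates why smooth regions score higher than textured ones; the paper's exact criterion and merging rule are assumptions not reproduced here, and the function names are illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)   # DC row has its own normalization
    return C

def dct_energy_compaction(patch, k=4):
    """Fraction of a square patch's energy captured by its k x k
    low-frequency DCT coefficients -- a simple texture descriptor in the
    spirit of the energy-compaction criterion (exact formulation assumed)."""
    C = dct_matrix(patch.shape[0])
    coeffs = C @ patch @ C.T          # separable 2-D DCT
    total = np.sum(coeffs ** 2)       # equals pixel energy (Parseval)
    return np.sum(coeffs[:k, :k] ** 2) / total
```

A smooth ramp concentrates almost all its energy in the low-frequency corner, whereas noise spreads energy across all coefficients, so comparing compaction scores gives a cheap texture-similarity test for region merging.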
  • ABSTRACT: Derived from traditional manifold learning algorithms, local discriminant analysis methods identify underlying submanifold structures while employing discriminative information for dimensionality reduction. Mathematically, they can all be unified in a graph embedding framework with different construction criteria. However, such learning algorithms are limited by the curse of dimensionality when the original data lie on a high-dimensional manifold. Unlike existing algorithms, we treat the discriminant embedding as a kernel analysis approach in the sample space, and a kernel-view based discriminant method is proposed for embedded feature extraction, in which both PCA pre-processing and data pruning can be avoided. Extensive experiments on high-dimensional data sets show the robustness and outstanding performance of the proposed method.
    Neurocomputing. 01/2011; 74:1478-1484.
  • ABSTRACT: Dimensionality reduction and incremental learning have recently received broad attention in many applications of data mining, pattern recognition, and information retrieval. Inspired by the concept of manifold learning, many discriminant embedding techniques have been introduced to seek a low-dimensional discriminative manifold structure in a high-dimensional space for feature reduction and classification. However, such graph-embedding-based subspace methods usually face two limitations: (1) since no updating rule is available for local discriminant analysis with newly added data, it is difficult to design an incremental learning algorithm; and (2) the small sample size (SSS) problem usually occurs when the original data lie in a very high-dimensional space. To overcome these problems, this paper devises a supervised learning method, called local discriminant subspace embedding (LDSE), to extract discriminative features. An incremental-mode algorithm, incremental LDSE (ILDSE), is then proposed to learn the local discriminant subspace from newly inserted data, extending the batch LDSE algorithm to incremental learning via the idea of the singular value decomposition (SVD) updating algorithm. Furthermore, the SSS problem is avoided for high-dimensional data, and benchmark incremental learning experiments on face recognition show that ILDSE incurs much less computational cost than the batch algorithm.
    IEEE Transactions on Systems Man and Cybernetics Part C (Applications and Reviews) 01/2010; 40:580-591. · 2.55 Impact Factor
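The SVD-updating idea behind the incremental step can be sketched with a standard column-append update: when new samples arrive as extra columns, the existing thin SVD is rotated by the SVD of a small middle matrix instead of being recomputed from scratch. This is a generic scheme in the spirit of the abstract, not ILDSE itself; the function name is illustrative.

```python
import numpy as np

def svd_append_columns(U, s, Vt, C):
    """Given the thin SVD A = U @ diag(s) @ Vt, return the thin SVD of
    [A, C] after new columns C arrive. A standard SVD-updating sketch;
    the paper's exact incremental rule is assumed, not reproduced."""
    k, c = s.size, C.shape[1]
    L = U.T @ C                     # new columns in the current subspace
    J, K = np.linalg.qr(C - U @ L)  # orthonormal basis of the residual
    # small (k+c) x (k+c) middle matrix whose SVD rotates the factors
    M = np.block([[np.diag(s), L],
                  [np.zeros((c, k)), K]])
    Um, sm, Vmt = np.linalg.svd(M, full_matrices=False)
    U_new = np.hstack([U, J]) @ Um
    Vt_new = Vmt @ np.block([[Vt, np.zeros((k, c))],
                             [np.zeros((c, Vt.shape[1])), np.eye(c)]])
    return U_new, sm, Vt_new
```

The cost is dominated by the SVD of the small matrix M rather than of the full data matrix, which is where the claimed saving over the batch algorithm comes from.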
  • ABSTRACT: For pattern analysis and recognition, it is generally necessary to find a meaningful low-dimensional representation of the data. In the past decades, subspace learning methods have been regarded as useful tools for feature extraction and dimensionality reduction. Without loss of generality, linear subspace learning algorithms can be explained as enhancing the affinity and repulsion of certain data pairs. From this point of view, a novel linear discriminant method, termed Marginal Discriminant Projections (MDP), is proposed to learn the marginal subspace. Unlike the existing marginal learning method, the maladjusted learning problem is alleviated by adopting a hierarchical fuzzy clustering approach, in which the discriminative margin is found adaptively and iterative objective optimization is avoided. In addition, within the presented subspace learning framework, the proposed method is immune to the well-known curse of dimensionality. Experiments on extensive datasets demonstrate the effectiveness of the proposed MDP for discriminative learning and recognition tasks.
    Pattern Recognition Letters. 01/2010; 31:1965-1974.
  • ABSTRACT: In order to exploit the informative components hidden in nonnegative matrix factorization, an information-theoretic learning method, termed ITNMF, is presented. Unlike existing NMF methods, the proposed method is able to handle general objective optimization and adopts the conjugate gradient technique to accelerate the iterative optimization. To tackle the null matrix factorization problem, the line search approach adopts guaranteed conditions while keeping each feasible step a descent step. In addition, a function-value-based stopping rule is employed to improve efficiency. Pattern classification experiments on data sets with varying pose and illumination conditions show that the proposed method outperforms existing methods.
    01/2010;
  • International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI). 01/2009; 23:1161-1177.
  • ABSTRACT: Many applications in machine learning and computer vision come down to feature representation and reduction. Manifold learning seeks the intrinsic low-dimensional manifold structure hidden in high-dimensional data. In the past few years, many local discriminant analysis methods have been proposed to exploit the discriminative submanifold structure by extending the manifold learning idea to supervised settings. In particular, marginal Fisher analysis (MFA) finds the local interclass margin for feature extraction and classification. However, since only a limited set of data pairs is employed to determine the discriminative margin, such a method usually suffers from the maladjusted learning problem introduced in this paper. To improve the discriminant ability of MFA, we combine the marginal Fisher idea with the global between-class separability criterion (BCSC) and propose a novel supervised learning method, called local and global margin projections (LGMP), in which the maladjusted learning problem can be alleviated. Experimental evaluation shows that the proposed LGMP outperforms the original MFA.
    Neurocomputing. 01/2009;
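The margin graphs at the heart of MFA can be sketched as two k-NN adjacency matrices, one linking same-class neighbors (compactness) and one linking between-class marginal pairs (separability). This is a generic construction under assumed parameters k1, k2, not the LGMP criterion itself.

```python
import numpy as np

def mfa_graphs(X, y, k1=3, k2=3):
    """Build the two adjacency graphs used by marginal Fisher analysis:
    Wi links each sample to its k1 nearest SAME-class neighbours,
    Wb links it to its k2 nearest DIFFERENT-class neighbours.
    A sketch of the graph construction only (assumed k-NN variant)."""
    n = X.shape[1]                                   # X: features x samples
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # pairwise dists
    Wi = np.zeros((n, n))
    Wb = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])[:k1]]:  # intraclass neighbours
            Wi[i, j] = Wi[j, i] = 1.0
        for j in diff[np.argsort(D[i, diff])[:k2]]:  # marginal interclass pairs
            Wb[i, j] = Wb[j, i] = 1.0
    return Wi, Wb
```

The abstract's point about maladjusted learning follows from this construction: only the k2 nearest between-class pairs shape the margin, which is why LGMP adds a global between-class separability term.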
  • ABSTRACT: Inspired by the concept of manifold learning, discriminant embedding techniques aim to exploit the low-dimensional discriminant manifold structure hidden in high-dimensional space for dimensionality reduction and classification. However, such graph-embedding-based techniques usually suffer from high computational complexity and the small sample size (SSS) problem. To address this, we reformulate the Laplacian matrix and propose a regularized neighborhood discriminant analysis method, namely RNDA, to discover local discriminant information; it follows an approach similar to regularized LDA. Compared with other discriminant embedding techniques, RNDA achieves efficiency by employing QR decomposition as a pre-step. Experiments on face databases demonstrate the strong performance of the proposed method.
    Wavelet Analysis and Pattern Recognition, 2008. ICWAPR '08. International Conference on; 10/2008