Qingshan Liu

Nanjing University of Science and Technology, Nanjing, Jiangsu, China

Publications (135) · 94.51 Total Impact Points

  • Qingshan Liu · Jiankang Deng · Dacheng Tao ·
    ABSTRACT: Localizing facial landmarks is a fundamental step in facial image analysis. However, the problem remains challenging due to the large variability in expression, illumination, and pose, and the existence of occlusions in real-world face images. In this paper, we present a dual sparse constrained cascade regression model for robust face alignment. Instead of using the least-squares method during the training of the regressors, a sparse constraint is introduced to select robust features and compress the size of the model. Moreover, a sparse shape constraint is incorporated between cascade stages; such explicit shape constraints are able to suppress the ambiguity in local features. To improve the model's adaptation to large pose variation, the face pose is estimated from five fiducial landmarks located by a deep convolutional neural network, and is used to adaptively design the cascade regression model. To the best of our knowledge, this is the first attempt to fuse an explicit shape constraint (sparse shape constraint) and implicit context information (sparse feature selection) for robust face alignment in the cascade regression framework. Extensive experiments on nine challenging wild data sets demonstrate the advantages of the proposed method over state-of-the-art methods.
    IEEE Transactions on Image Processing 11/2015; DOI:10.1109/TIP.2015.2502485 · 3.63 Impact Factor
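A minimal sketch of the training change described above: replacing a cascade stage's least-squares fit with an L1-constrained regressor so that only a sparse subset of features carries weight. The feature dimension, landmark count, and the `train_cascade_stage` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def train_cascade_stage(features, shape_residuals, alpha=0.01):
    """Fit one cascade stage: map local features to shape updates.

    features:        (n_samples, n_features) local appearance features
    shape_residuals: (n_samples, 2 * n_landmarks) target shape minus
                     current shape estimate
    """
    # The L1 penalty zeroes out unreliable features and compresses the
    # regressor, in contrast to a dense least-squares solution.
    regressor = Lasso(alpha=alpha, max_iter=5000)
    regressor.fit(features, shape_residuals)
    return regressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128))      # mock local features
Y = rng.standard_normal((200, 2 * 68))   # mock residuals for 68 landmarks
stage = train_cascade_stage(X, Y)
print(np.mean(stage.coef_ != 0))         # fraction of features retained
```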
  • Ting Yuan · Jian Cheng · Xi Zhang · Qingshan Liu · Hanqing Lu ·
    ABSTRACT: Incorporating the influence of social relationships effectively is fundamental to social recommendation (SR). However, most SR algorithms are based on the homophily assumption and ignore both friends' differing influence on a user and users' differing willingness to be influenced, which may integrate improper influence information and harm the recommendation results. To address this, we propose a unified framework that properly incorporates the influence of social relationships into recommendation, guided by the mining of buddies (friends who have strong influence on a user) and susceptibility (the willingness to be influenced). Specifically, the Social Influence Propagation (SIP) method is proposed to identify each user's buddies and susceptibility, and the Social Influence based Recommendation model is proposed to generate the final recommendation. Experiments on real-world data demonstrate that the proposed framework better utilizes users' social relationships, resulting in increased recommendation accuracy.
  • Xiao-Tong Yuan · Zhenzhen Wang · Jiankang Deng · Qingshan Liu ·
    ABSTRACT: Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings, as it expands the original features to a higher-dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability with respect to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach significantly speeds up the training of χ² kernel SVMs at almost no cost in testing accuracy.
    IEEE Transactions on Neural Networks and Learning Systems 09/2015; DOI:10.1109/TNNLS.2015.2476659 · 4.29 Impact Factor
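A rough sketch of the pipeline idea using scikit-learn stand-ins: an explicit additive-χ² feature map, a sparse random projection to shrink the expanded features, and a linear SVM trained on the result. Dimensions and data are synthetic placeholders; this illustrates the concept rather than reproducing the paper's algorithm or its error bound.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 300))    # histogram-like features (must be non-negative)
y = rng.integers(0, 2, 500)

model = make_pipeline(
    AdditiveChi2Sampler(sample_steps=2),       # explicit chi2 feature map
    SparseRandomProjection(n_components=256),  # compress the expanded map
    LinearSVC(),                               # fast linear SVM training
)
model.fit(X, y)
print(model.score(X, y))
```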
  • Renlong Hang · Qingshan Liu · Huihui Song · Yubao Sun ·
    ABSTRACT: Spatial–spectral feature fusion is well acknowledged as an effective method for hyperspectral (HS) image classification, and many previous studies have been devoted to this subject. However, these methods often treat the high-dimensional spatial–spectral data as a 1-D vector before extracting informative features for classification. In this paper, we propose a new HS image classification method. Specifically, a matrix-based spatial–spectral feature representation is designed for each pixel to capture the local spatial context and the spectral information of all the bands, which well preserves the spatial–spectral correlation. Matrix-based discriminant analysis is then adopted to learn a discriminative feature subspace for classification. To further improve the performance of this subspace, a random sampling technique is used to produce a subspace ensemble for the final HS image classification. Experiments are conducted on three HS remote sensing data sets acquired by different sensors, and the results demonstrate the effectiveness of the proposed method.
    IEEE Transactions on Geoscience and Remote Sensing 09/2015; DOI:10.1109/TGRS.2015.2465899 · 3.51 Impact Factor
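A minimal sketch of the matrix-based representation described above: each pixel is represented by a matrix whose rows are the spectra of its spatial neighbors, preserving the spatial–spectral correlation instead of flattening everything into a 1-D vector. Window size and cube dimensions are illustrative assumptions.

```python
import numpy as np

def pixel_matrix(cube, row, col, w=3):
    """cube: (H, W, B) hyperspectral image; returns a (w*w, B) matrix
    holding the spectra of the w-by-w neighborhood around (row, col)."""
    r = w // 2
    patch = cube[row - r:row + r + 1, col - r:col + r + 1, :]
    return patch.reshape(-1, cube.shape[2])  # rows: neighbors, cols: bands

cube = np.random.rand(64, 64, 100)           # mock 100-band image
print(pixel_matrix(cube, 10, 10).shape)      # (9, 100)
```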
  • Qingshan Liu · Jing Yang · Kaihua Zhang · Yi Wu ·
    ABSTRACT: Recently, the compressive tracking (CT) method has attracted much attention due to its high efficiency, but it cannot deal well with large variations of the target's appearance, because its data-independent random projection matrix yields less discriminative features. To address this issue, we propose an adaptive CT approach that selects the most discriminative features to build an effective appearance model. Our method significantly improves CT in three aspects. First, the most discriminative features are selected via an online vector boosting method. Second, the object representation is updated in an effective online manner, which preserves stable features while filtering out noisy ones. Finally, a simple and effective trajectory rectification approach is adopted to make the estimated location more accurate. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of our algorithm compared with state-of-the-art tracking algorithms.
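A minimal sketch of the two ingredients named above: data-independent compressive features from a sparse random matrix, and a discriminative score used to keep only the most separating ones. The Fisher-like ratio here is a generic stand-in for the paper's online vector boosting, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Sparse random measurement matrix (entries in {-1, 0, +1}).
R = rng.choice([-1.0, 0.0, 1.0], size=(50, 1024), p=[1/6, 2/3, 1/6])

def select_features(pos_patches, neg_patches, k=16):
    """Project patches, then rank features by a Fisher-like ratio."""
    fp, fn = pos_patches @ R.T, neg_patches @ R.T
    score = (fp.mean(0) - fn.mean(0)) ** 2 / (fp.var(0) + fn.var(0) + 1e-9)
    return np.argsort(score)[::-1][:k]   # most discriminative first

pos = rng.random((30, 1024))             # vectorized target samples
neg = rng.random((100, 1024))            # vectorized background samples
print(select_features(pos, neg))
```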
  • ABSTRACT: This paper focuses on simultaneous sample and feature selection for machine learning in a fully unsupervised setting. Most existing works tackle these two problems separately, giving rise to the two well-studied sub-areas of active learning and feature selection, yet a unified approach is desirable because the problems are interleaved: noisy, high-dimensional features adversely affect sample selection, while 'good' samples benefit feature selection. We present a unified framework that conducts active learning and feature selection simultaneously. From the data-reconstruction perspective, the selected samples and the selected features should each best approximate the original dataset, so that the selected samples, characterized by the selected features, are highly representative. Additionally, our method is one-shot, without iteratively selecting samples for progressive labeling, and is thus especially suitable when initial labeled samples are scarce or entirely absent, a setting that existing works hardly address, particularly with simultaneous feature selection. To alleviate the NP-hardness of the raw problem, the proposed formulation relaxes it to a convex but non-smooth optimization problem, which we solve efficiently with an iterative algorithm whose global convergence we prove. Experiments on publicly available datasets validate that our method is promising compared with the state of the art.
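A simplified illustration of the reconstruction view above: pick representative samples by solving min_W ||X − WX||_F² + λ·(sum of column norms of W) with proximal gradient steps, then keep the samples whose coefficient columns have the largest norms. This toy solver covers only the sample-selection half and is not the paper's joint formulation.

```python
import numpy as np

def select_samples(X, lam=0.5, steps=200, k=10):
    """X: (n, d) data matrix, rows are samples."""
    n = X.shape[0]
    G = X @ X.T                                   # Gram matrix of samples
    lr = 1.0 / (2 * np.linalg.norm(G, 2))         # step from Lipschitz bound
    W = np.zeros((n, n))
    for _ in range(steps):
        W -= lr * 2 * (W @ G - G)                 # gradient of ||X - WX||^2
        norms = np.linalg.norm(W, axis=0, keepdims=True)
        W *= np.maximum(0, 1 - lr * lam / (norms + 1e-12))  # column shrink
    # columns with large norms correspond to frequently reused samples
    return np.argsort(-np.linalg.norm(W, axis=0))[:k]

X = np.random.rand(100, 40)
print(select_samples(X))
```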
  • Zhenzhen Wang · Xiao-Tong Yuan · Qingshan Liu ·
    ABSTRACT: χ² kernel based support vector machines (SVMs) have achieved impressive performance in many image and text classification tasks. As a nonlinear kernel method, however, the approach does not scale well to large data, because computing the χ² kernel matrix is intractable. To address this challenge, we propose a sparse random projection method to linearly approximate the χ² kernel, so that the original nonlinear SVMs can be converted to linear ones, allowing us to exploit existing large-scale linear SVM training methods. Experimental results on three popular benchmark data sets (MNIST, rcv1.binary, Caltech-101) show that the proposed method significantly improves the learning efficiency of χ² kernel SVMs at almost no cost in accuracy.
    Neurocomputing 03/2015; 151:327-332. DOI:10.1016/j.neucom.2014.09.032 · 2.08 Impact Factor
  • Huihui Song · Bo Huang · Qingshan Liu · Kaihua Zhang ·
    ABSTRACT: To take advantage of the wide swath width of Landsat Thematic Mapper (TM)/Enhanced Thematic Mapper Plus (ETM+) images and the high spatial resolution of Système Pour l'Observation de la Terre 5 (SPOT5) images, we present a learning-based super-resolution method to fuse these two data types. The fused images are expected to be characterized by the swath width of TM/ETM+ images and the spatial resolution of SPOT5 images. To this end, we first model the imaging process from a SPOT image to a TM/ETM+ image at their corresponding bands, by building an image degradation model via blurring and downsampling operations. With this degradation model, we can generate a simulated Landsat image from each SPOT5 image, thereby avoiding the requirement for geometric coregistration for the two input images. Then, band by band, image fusion can be implemented in two stages: 1) learning a dictionary pair representing the high- and low-resolution details from the given SPOT5 and the simulated TM/ETM+ images; 2) super-resolving the input Landsat images based on the dictionary pair and a sparse coding algorithm. It is noteworthy that the proposed method can also deal with the conventional spatial and spectral fusion of TM/ETM+ and SPOT5 images by using the learned dictionary pairs. To examine the performance of the proposed method of fusing the swath width of TM/ETM+ and the spatial resolution of SPOT5, we illustrate the fusion results on the actual TM images and compare with several classic pansharpening methods by assuming that the corresponding SPOT5 panchromatic image exists. Furthermore, we implement the classification experiments on both actual images and fusion results to demonstrate the benefits of the proposed method for further classification applications.
    IEEE Transactions on Geoscience and Remote Sensing 03/2015; 53(3):1195-1204. DOI:10.1109/TGRS.2014.2335818 · 3.51 Impact Factor
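A minimal sketch of the degradation model described above: a SPOT5 band is blurred with a Gaussian point-spread-function proxy and downsampled to simulate the corresponding TM/ETM+ band. The kernel width and scale factor are illustrative assumptions, not calibrated sensor parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_tm_band(spot_band, scale=3, sigma=1.5):
    """spot_band: 2-D array at SPOT5 resolution; returns a simulated
    TM/ETM+ band at a `scale`-times coarser resolution."""
    blurred = gaussian_filter(spot_band, sigma=sigma)  # PSF proxy (blur)
    return blurred[::scale, ::scale]                   # downsample

spot = np.random.rand(300, 300)
print(simulate_tm_band(spot).shape)                    # (100, 100)
```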
  • Jiankang Deng · Yubao Sun · Qingshan Liu · Hanqing Lu ·
    ABSTRACT: Localizing facial landmarks is an essential prerequisite to facial image analysis. However, due to the large variability in expression, illumination, and pose, and the existence of occlusions in real-world face images, localizing facial landmarks efficiently remains a challenging problem. In this paper, we present a low-rank driven regression model for robust facial landmark localization. Our approach consists of a low-rank face frontalization step and a sparse shape constrained cascade regression step. (1) Exploiting the low-rank prior of face images, we recover a low-rank face from its deformed image together with the associated deformation, despite significant distortion and corruption; aligning the recovered frontal face image is simpler and more effective. (2) Based on the sparse coding of the face shape over a shape dictionary learnt from training data, a sparse shape constrained cascade regression model is proposed to simultaneously suppress the ambiguity in local features and the outliers caused by occlusion; the sparse residual error deviating from the low-rank face texture is also utilized to predict the occluded area. Extensive results on several wild benchmarks such as COFW, LFPW, and Helen demonstrate that the proposed method is robust to facial occlusions, pose variations, and exaggerated facial expressions.
    Neurocomputing 03/2015; 151:196-206. DOI:10.1016/j.neucom.2014.09.052 · 2.08 Impact Factor
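A toy illustration of the low-rank recovery idea: split a matrix of stacked (vectorized) face samples into a low-rank part and a sparse error by alternating singular-value thresholding and soft thresholding, in the spirit of robust PCA. This ignores the paper's deformation handling and is purely a sketch.

```python
import numpy as np

def lowrank_sparse(D, lam=None, tau=1.0, iters=50):
    """Approximately minimize tau*(||L||_* + lam*||S||_1) + 0.5*||D-L-S||^2."""
    lam = lam or 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    for _ in range(iters):
        # singular value thresholding for the low-rank part
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0)) @ Vt
        # soft thresholding for the sparse (occlusion-like) part
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - tau * lam, 0)
    return L, S

D = np.random.rand(100, 50)        # columns: vectorized face samples
L, S = lowrank_sparse(D)
print(np.linalg.matrix_rank(L, tol=1e-3))
```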
  • Kaihua Zhang · Qingshan Liu · Yi Wu · Ming-Hsuan Yang ·
    ABSTRACT: Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming, and the learned generic representation may be less discriminative for tracking specific objects. In this paper, we show that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation that maintains the relative geometric positions of the local intensity patterns, so the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to adapt robustly to target appearance variations. Our convolutional networks have a surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.
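A minimal sketch of the training-free representation described above: normalized patches sampled from the first-frame target serve as filters, and their response maps over a candidate region encode local structure. Patch size and filter count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def make_filters(target, n_filters=10, k=6, seed=0):
    """Randomly crop and normalize k-by-k patches from the target."""
    rng = np.random.default_rng(seed)
    H, W = target.shape
    filters = []
    for _ in range(n_filters):
        r, c = rng.integers(0, H - k), rng.integers(0, W - k)
        p = target[r:r + k, c:c + k].astype(float)
        filters.append((p - p.mean()) / (p.std() + 1e-9))
    return filters

def feature_maps(region, filters):
    # each map measures similarity between a filter and local patterns
    return [correlate2d(region, f, mode='valid') for f in filters]

target = np.random.rand(32, 32)                 # first-frame target region
maps = feature_maps(np.random.rand(40, 40), make_filters(target))
print(len(maps), maps[0].shape)
```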
  • Guangcan Liu · Huan Xu · Jinhui Tang · Qingshan Liu · Shuicheng Yan ·

    IEEE Transactions on Pattern Analysis and Machine Intelligence 01/2015; DOI:10.1109/TPAMI.2015.2453969 · 5.78 Impact Factor
  • Changsheng Li · Weishan Dong · Qingshan Liu · Xin Zhang ·
    ABSTRACT: Online multiple-output regression is an important machine learning technique for modeling, predicting, and compressing multi-dimensional correlated data streams. In this paper, we propose a novel online multiple-output regression method, called MORES, for streaming data. MORES can dynamically learn the structure of the regression coefficients to facilitate the model's continuous refinement. We observe that the limited expressive ability of the regression model, especially in the preliminary stage of online updating, often leaves the variables in the residual errors dependent. In light of this, MORES dynamically learns and leverages the structure of the residual errors to improve prediction accuracy. Moreover, we define three statistical variables to exactly represent all the seen samples for incrementally calculating the prediction loss in each online update round, which avoids loading all the training data into memory for model updating and also effectively prevents drastic fluctuation of the model in the presence of noise. Furthermore, we introduce a forgetting factor that weights samples differently, so as to quickly track the evolving characteristics of the data streams from the latest samples. Experiments on three real-world datasets validate the effectiveness and efficiency of the proposed method.
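A minimal sketch of the bookkeeping described above: a few running statistics summarize all seen samples so that each update needs no stored data, and a forgetting factor down-weights old samples. This mirrors the flavor of MORES rather than its structured-coefficient and residual-structure updates; all names are illustrative.

```python
import numpy as np

class OnlineMultiOutputRegressor:
    def __init__(self, d, m, forget=0.98, ridge=1e-3):
        self.A = ridge * np.eye(d)   # running X^T X (input scatter)
        self.B = np.zeros((d, m))    # running X^T Y (cross moments)
        self.forget = forget

    def update(self, x, y):
        x, y = np.atleast_2d(x), np.atleast_2d(y)
        self.A = self.forget * self.A + x.T @ x   # decay old evidence
        self.B = self.forget * self.B + x.T @ y
        self.W = np.linalg.solve(self.A, self.B)  # coefficient matrix

    def predict(self, x):
        return np.atleast_2d(x) @ self.W

reg = OnlineMultiOutputRegressor(d=5, m=3)
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(5)
    y = x[:3] + 0.1 * rng.standard_normal(3)      # correlated outputs
    reg.update(x, y)
print(reg.predict(rng.standard_normal(5)))        # shape (1, 3)
```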
  • Changsheng Li · Qingshan Liu · Weishan Dong · Xin Zhang · Lin Yang ·
    ABSTRACT: In this paper, we propose a new max-margin based discriminative feature learning method. Specifically, we aim at learning a low-dimensional feature representation that maximizes the global margin of the data and makes samples from the same class as close as possible. To enhance robustness to noise, an ℓ2,1-norm constraint is introduced to impose group sparsity on the transformation matrix. In addition, for multi-class classification tasks, we further learn and leverage the correlations among the class tasks to assist in learning discriminative features. The experimental results demonstrate the power of the proposed method against related state-of-the-art methods.
  • Kaihua Zhang · Qingshan Liu · Huihui Song · Xuelong Li ·
    ABSTRACT: This paper presents a novel variational approach for simultaneous bias-field estimation and segmentation of images with intensity inhomogeneity. We model the intensities of inhomogeneous objects as Gaussian distributions with different means and variances, and introduce a sliding window to map the original image intensities onto another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise-constant signal within the sliding window. A maximum-likelihood energy functional is then defined on each local region, combining the bias field, the membership function of the object region, and the constant approximating the true signal of the corresponding object. The energy functional is extended to the whole image domain by Bayesian learning. An efficient iterative algorithm is proposed for energy minimization, through which image segmentation and bias-field correction are achieved simultaneously. Furthermore, the smoothness of the obtained optimal bias field is ensured by normalized convolutions without extra cost. Experiments on real images demonstrate the superiority of the proposed algorithm over other representative state-of-the-art methods.
    IEEE Transactions on Cybernetics 10/2014; 45(8). DOI:10.1109/TCYB.2014.2352343 · 3.47 Impact Factor
  • ABSTRACT: In this paper, we propose a new method, inspired by multi-task learning, to learn the regression coefficient matrix for multiple-output regression. We incorporate high-order structure information among the regression coefficients into the estimation of the coefficient matrix, which is of great importance for multiple-output regression. Meanwhile, we describe the output structure with a noise covariance matrix to assist in learning the model parameters. Since real-world data are often corrupted by noise, we place a norm-minimization constraint on the regression coefficient matrix to make it robust to noise. Experiments are conducted on three publicly available datasets, and the results demonstrate the power of the proposed method against state-of-the-art methods.
    The 22nd International Conference on Pattern Recognition (ICPR); 08/2014
  • Changsheng Li · Qingshan Liu · Jing Liu · Hanqing Lu ·
    ABSTRACT: Recently, distance metric learning (DML) has attracted much attention in image retrieval, but most previous methods only work for image classification and clustering tasks. In this brief, we focus on designing ordinal DML algorithms for image ranking tasks, by which the rank levels among images can be well measured. We first present a linear ordinal Mahalanobis DML model that tries to preserve both the local geometry information and the ordinal relationships of the data. Then, considering that real-world image data often have nonlinear structures, we develop a nonlinear DML method by kernelizing the above model. To further improve the ranking performance, we finally derive a multiple-kernel DML approach, inspired by the idea of multiple-kernel learning, that applies different kernel operators to different kinds of image features. Extensive experiments on four benchmarks demonstrate the power of the proposed algorithms against related state-of-the-art methods.
    IEEE Transactions on Neural Networks and Learning Systems 08/2014; 26(7). DOI:10.1109/TNNLS.2014.2339100 · 4.29 Impact Factor
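A minimal sketch of ranking with a learned Mahalanobis metric: given a positive semidefinite matrix M, gallery images are ranked by their metric distance to a query. Learning M (the paper's actual contribution) is omitted; the M used here is a random PSD placeholder.

```python
import numpy as np

def mahalanobis_rank(query, gallery, M):
    """Return gallery indices sorted by Mahalanobis distance to query."""
    diff = gallery - query                        # (n, d) differences
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)  # squared distances
    return np.argsort(d2)                         # closest first

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
M = A @ A.T                                       # any PSD matrix works here
ranking = mahalanobis_rank(rng.standard_normal(8),
                           rng.standard_normal((20, 8)), M)
print(ranking[:5])
```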
  • Yubao Sun · Qingshan Liu · Jinhui Tang · Dacheng Tao ·
    ABSTRACT: In recent years, sparse representation has been widely used in object recognition applications, and how to learn the dictionary is a key issue. A popular method is to use the l1 norm as the sparsity measure on the representation coefficients during dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot well capture the multi-subspace structural information of the data. Additionally, the learned sub-dictionaries for the individual classes usually share some common atoms, which weakens the discriminative ability of each sub-dictionary's reconstruction error. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific sub-dictionary for each class and a common sub-dictionary shared by all classes. The model is composed of a discriminative fidelity term, a weighted group sparse constraint, and a sub-dictionary incoherence term. The discriminative fidelity encourages each class-specific sub-dictionary to sparsely represent the samples of its corresponding class. The weighted group sparse constraint captures the structural information of the data. The sub-dictionary incoherence term makes all sub-dictionaries as independent as possible. Because the common sub-dictionary represents features shared by all classes, we only use the reconstruction error of each class-specific sub-dictionary for classification. Extensive experiments are conducted on several public image databases, and the results demonstrate the power of the proposed method compared with the state of the art.
    IEEE Transactions on Image Processing 06/2014; 23(9). DOI:10.1109/TIP.2014.2331760 · 3.63 Impact Factor
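A minimal sketch of the classification rule described above: sparse-code a sample over the concatenated dictionary, then assign the class whose class-specific sub-dictionary yields the smallest reconstruction error, ignoring the shared atoms. The dictionaries here are random placeholders rather than learned ones.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify(x, sub_dicts, common, n_nonzero=10):
    D = np.hstack(sub_dicts + [common])              # full dictionary
    code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    errors, start = [], 0
    for Dk in sub_dicts:
        k = Dk.shape[1]
        # reconstruct with this class's atoms only; the common atoms
        # carry no discriminative information and are left out
        errors.append(np.linalg.norm(x - Dk @ code[start:start + k]))
        start += k
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
subs = [rng.standard_normal((64, 20)) for _ in range(3)]  # 3 class dicts
common = rng.standard_normal((64, 10))                    # shared atoms
print(classify(rng.standard_normal(64), subs, common))
```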
  • Jun Xu · Renlong Hang · Qingshan Liu ·
    ABSTRACT: Active learning (AL) has been shown to be a useful approach to improving the efficiency of the classification process for remote-sensing imagery. Current AL methods are essentially based on pixel-wise classification. In this paper, a new patch-based active learning (PTAL) framework is proposed for spectral-spatial classification of hyperspectral remote-sensing data. The method consists of two major steps. In the initialization stage, the original hyperspectral images are partitioned into overlapping patches, and for each patch the spectral and spatial information as well as the label are extracted. A small set of patches is randomly selected from the data set for annotation, and a patch-based support vector machine (PTSVM) classifier is initially trained with these patches. In the second, closed-loop stage of querying and retraining, the trained PTSVM classifier is combined with one of three query methods, namely margin sampling (MS), entropy query-by-bagging (EQB), or multi-class level uncertainty (MCLU), and is employed to query the most informative samples from the candidate pool comprising the remaining patches of the data set. The query selection cycle enables the PTSVM model to select the most informative queries for human annotation, and these queries are then added to the training set. This process runs iteratively until a stopping criterion is met. Finally, the trained PTSVM is employed for patch classification. To enable comparison with pixel-based active learning (PXAL) models, the patch-level predictions of the PTSVM are transformed into pixel-wise labels to obtain classification maps. Experimental results on three different hyperspectral data sets show that the proposed PTAL methods outperform PXAL methods in both classification accuracy and computational time.
    International Journal of Remote Sensing 03/2014; 35(5):1846-1875. DOI:10.1080/01431161.2013.879349 · 1.65 Impact Factor
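A minimal sketch of the margin-sampling (MS) query rule used in the loop described above: after training an SVM, query the unlabeled candidates that lie closest to the decision boundary. PTSVM specifics (patch features, spectral-spatial stacking) are abstracted away, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def margin_sampling_query(clf, candidates, n_queries=5):
    # distance to the separating hyperplane; small means uncertain
    margins = np.abs(clf.decision_function(candidates))
    return np.argsort(margins)[:n_queries]   # most informative first

rng = np.random.default_rng(0)
X, y = rng.random((100, 30)), rng.integers(0, 2, 100)  # initial labeled set
clf = SVC(kernel='rbf').fit(X, y)
pool = rng.random((500, 30))                           # unlabeled candidates
print(margin_sampling_query(clf, pool))                # indices to annotate
```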
  • Changsheng Li · Qingshan Liu · Weishan Dong · Xiaobin Zhu · Jing Liu · Hanqing Lu ·
    ABSTRACT: In this paper, we propose a novel feature selection-based method for facial age estimation. Face aging is a typical temporal process, and facial images should exhibit certain ordinal patterns in the aging feature space. From the geometrical perspective, a facial image can usually be seen as sampled from a low-dimensional manifold embedded in the original high-dimensional feature space. Thus, we first measure the energy of each feature in preserving the underlying local structure information and the ordinal information of the facial images, and then learn a low-dimensional aging representation that maximally preserves both kinds of information. To further improve the performance, we eliminate redundant local and ordinal information as much as possible by minimizing the nonlinear correlation and rank correlation among features. Finally, we formulate all these issues into a unified optimization problem, which is similar in form to linear discriminant analysis. Since it is expensive to collect labeled facial aging images in practice, we extend the proposed supervised method to a semi-supervised learning mode, including a semi-supervised feature selection method and a semi-supervised age prediction algorithm. Extensive experiments are conducted on the FACES dataset, the Images of Groups dataset, and the FG-NET aging dataset to show the power of the proposed algorithms compared with the state of the art.
    IEEE Transactions on Cybernetics 01/2014; 45(11). DOI:10.1109/TCYB.2014.2376517 · 3.47 Impact Factor
  • ABSTRACT: In this paper, we present a new framework to monitor medication intake of elderly individuals by incorporating a video camera and Radio Frequency Identification (RFID) sensors. The proposed framework can provide a key function for monitoring the activities of daily living (ADLs) of elderly people in their own homes. In an assistive environment, RFID tags are applied to medicine bottles located in a medicine cabinet, so that each medicine bottle has a unique ID; the description of the medicine data for each tag is manually entered into a database. RFID readers detect whether any of these bottles is taken away from the medicine cabinet and identify the tag attached to the bottle. A video camera continuously monitors the activity of taking medicine by integrating face detection and tracking, mouth detection, background subtraction, and activity detection. Preliminary results demonstrate 100% detection accuracy in identifying medicine bottles and promising results in monitoring the activity of taking medicine.
    07/2013; 2(2):61-70. DOI:10.1007/s13721-013-0025-y
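A small illustration of the background-subtraction component mentioned above, using OpenCV's MOG2 model on a live stream. The camera index and the motion threshold are assumptions; face/mouth detection and the RFID pipeline are separate components not shown.

```python
import cv2

cap = cv2.VideoCapture(0)                        # assumed camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=200)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground = moving person
    if cv2.countNonZero(mask) > 5000:            # crude activity trigger
        print('motion detected')
    cv2.imshow('foreground', mask)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```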

Publication Stats

2k Citations
94.51 Total Impact Points


  • 2015
    • Nanjing University of Science and Technology
      Nanjing, Jiangsu, China
  • 2011-2015
    • Nanjing University of Information Science & Technology
      Nanjing, Jiangsu, China
  • 2007-2011
    • Rutgers, The State University of New Jersey
      • Department of Computer Science
      • Center for Computational Biomedicine Imaging and Modeling (CBIM)
      New Brunswick, New Jersey, United States
  • 2002-2009
    • Chinese Academy of Sciences
      • Institute of Automation
      • National Laboratory of Pattern Recognition
      Beijing, China
  • 2005-2008
    • The Chinese University of Hong Kong
      • Department of Information Engineering
      Hong Kong