Dacheng Tao

University of Technology Sydney , Sydney, New South Wales, Australia

Publications (428) · 729.31 Total Impact Points

  • ABSTRACT: In computer vision and pattern recognition research, the studied objects are often characterized by multiple high-dimensional feature representations, so it is essential to encode these multiview features into a unified, discriminative embedding that is optimal for a given task. To address this challenge, this paper proposes an ensemble manifold regularized sparse low-rank approximation (EMR-SLRA) algorithm for multiview feature embedding. The EMR-SLRA algorithm is based on the framework of least-squares component analysis; in particular, the low-dimensional feature representation and the projection matrix are obtained by low-rank approximation of the concatenated multiview feature matrix (see the sketch after this entry). By considering the complementary property among multiple features, EMR-SLRA simultaneously enforces ensemble manifold regularization on the output feature embedding. To further enhance robustness against noise, group sparsity is introduced into the objective to impose direct noise reduction on the input multiview feature matrix. Since there is no closed-form solution for EMR-SLRA, this paper provides an efficient optimization procedure to obtain the output feature embedding. Experiments on pattern recognition applications confirm the effectiveness of the EMR-SLRA algorithm compared with other multiview feature dimensionality reduction approaches.
    Pattern Recognition 10/2015; DOI:10.1016/j.patcog.2014.12.016 · 2.58 Impact Factor
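    A minimal sketch of the core step only, assuming hypothetical random data: it concatenates per-view feature matrices and takes a truncated-SVD low-rank approximation with NumPy. The ensemble manifold regularization and group sparsity terms of EMR-SLRA are omitted, and all names are illustrative.

```python
import numpy as np

def lowrank_multiview_embedding(views, d):
    """Concatenate per-view feature matrices (each n_samples x d_v) and
    return a rank-d approximation plus the d-dimensional embedding.
    This is only the low-rank approximation step, not full EMR-SLRA."""
    X = np.hstack(views)                      # n_samples x sum(d_v)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :d] * s[:d]                      # low-dimensional embedding
    P = Vt[:d].T                              # projection matrix
    X_approx = Z @ P.T                        # rank-d reconstruction
    return Z, P, X_approx

# toy usage with two random "views"
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 32)), rng.normal(size=(100, 48))]
Z, P, X_hat = lowrank_multiview_embedding(views, d=10)
```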
  • ABSTRACT: Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages in popular applications across various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise in the original features. Moreover, conventional optimization methods often converge slowly when training CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) that simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient and the historical gradients, and the Lipschitz constant is leveraged to specify a proper step size (see the Nesterov-style sketch after this entry). We show that the OGM can train the RCRF model very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This rate is theoretically superior to the O(1/k) convergence rate of previous first-order optimization methods. Extensive experiments on three practical image segmentation tasks demonstrate the efficacy of OGM in training the proposed RCRFs.
    IEEE Transactions on Image Processing 10/2015; 24(10):3124-36. DOI:10.1109/TIP.2015.2438553 · 3.11 Impact Factor
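    The sketch below is a generic Nesterov-style accelerated gradient loop on a toy smooth least-squares problem, illustrating the 1/L step size and momentum that yield the O(1/k^2) rate; it is not the authors' OGM for CRFs, and all data are synthetic.

```python
import numpy as np

def accelerated_gradient(grad, L, x0, iters=200):
    """Generic Nesterov-style accelerated gradient descent for a smooth
    convex objective with Lipschitz-continuous gradient (constant L).
    The step size 1/L and the momentum schedule yield an O(1/k^2) rate."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                        # gradient step from the lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum combination of past iterates
        x, t = x_next, t_next
    return x

# toy smooth problem: 0.5*||Ax - b||^2, with L = largest eigenvalue of A^T A
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
L = np.linalg.norm(A, 2) ** 2
x_hat = accelerated_gradient(lambda x: A.T @ (A @ x - b), L, np.zeros(20))
```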
  • ABSTRACT: Multiple-distortion assessment is a major challenge in image quality assessment (IQA). In this paper, a no-reference IQA model for multiply-distorted images is proposed. Features that are sensitive to each distortion type, even in the presence of other distortions, are first selected from three kinds of natural scene statistics (NSS) features. An improved bag-of-words (BoW) model is then applied to encode the selected features. Lastly, a simple yet effective linear combination is used to map the image features to a quality score; the combination weights are obtained through lasso regression (see the sketch after this entry). A series of experiments shows that the feature selection strategy and the improved BoW model are effective in improving the accuracy of quality prediction for multiple-distortion IQA. Compared with other algorithms, the proposed method delivers the best results for multiple-distortion IQA.
    IEEE Signal Processing Letters 10/2015; 22(10):1-1. DOI:10.1109/LSP.2015.2436908 · 1.64 Impact Factor
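    A hedged illustration of the final mapping step only, using scikit-learn's Lasso on hypothetical feature/score data; the distortion-sensitive NSS features and the improved BoW encoding of the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical encoded image features (n_images x n_codewords) and subjective quality scores.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 64)), rng.uniform(0, 100, size=200)
X_test = rng.normal(size=(20, 64))

# Lasso learns a sparse set of linear combination weights mapping features to quality scores.
model = Lasso(alpha=0.1)
model.fit(X_train, y_train)
predicted_quality = model.predict(X_test)      # linear combination with the learned weights
```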
  • ABSTRACT: Local binary patterns (LBP) have achieved great success in texture analysis, but they are not robust to noise. There are two reasons for this weakness of LBP schemes: (1) they encode the texture spatial structure based only on local information, which is sensitive to noise, and (2) they use exact values as the quantization thresholds, which makes the extracted features sensitive to small changes in the input image. In this paper, we propose a noise-robust adaptive hybrid pattern (AHP) for noisy texture analysis. In our scheme, two solutions, from the perspectives of the texture description model and the quantization algorithm, are developed to reduce the feature's noise sensitivity. First, a hybrid texture description model is proposed, in which the global texture spatial structure depicted by a global description model is encoded together with primitive micro-features. Second, we develop an adaptive quantization algorithm in which equal-probability quantization is utilized to achieve the maximum partition entropy, so that higher noise tolerance is obtained with minimal information loss in the quantization process (see the sketch after this entry). Experimental results on texture classification over two texture databases with three different types of noise show that our approach leads to significant improvement in noisy texture analysis. Furthermore, our scheme achieves state-of-the-art performance in noisy face recognition.
    Pattern Recognition 08/2015; 48(8). DOI:10.1016/j.patcog.2015.01.001 · 2.58 Impact Factor
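    A minimal sketch of equal-probability quantization, assuming a 1-D array of hypothetical feature responses: bin boundaries are placed at empirical quantiles so that each bin receives roughly the same number of samples.

```python
import numpy as np

def equal_probability_quantize(values, n_levels):
    """Quantize a 1-D array into n_levels bins whose boundaries are the
    empirical quantiles, so each bin receives (approximately) the same
    number of samples -- the maximum-partition-entropy idea."""
    quantiles = np.quantile(values, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(values, quantiles)     # labels in {0, ..., n_levels - 1}

# toy usage on hypothetical local feature responses
rng = np.random.default_rng(0)
responses = rng.normal(size=10_000)
codes = equal_probability_quantize(responses, n_levels=4)
print(np.bincount(codes) / codes.size)        # roughly uniform bin occupancy
```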
  • Xun Yang, Meng Wang, Dacheng Tao
    ABSTRACT: Object tracking is a fundamental problem in computer vision. Although much progress has been made, it remains challenging because it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. To improve reliability and effectiveness, this paper presents an approach that combines graph-based ranking with multiple feature representations for tracking. We construct multiple graph matrices from various types of visual features and integrate them into a regularization framework to learn a ranking vector (see the sketch after this entry). In particular, the approach exploits temporal consistency by adding a regularization term that constrains the difference between the weight vectors at adjacent frames. An effective iterative optimization scheme is also proposed. Experimental results on a variety of challenging video sequences show that the proposed algorithm performs favorably against state-of-the-art visual tracking methods.
    Neurocomputing 07/2015; 159. DOI:10.1016/j.neucom.2015.02.046 · 2.01 Impact Factor
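    The sketch below shows the classic single-graph manifold-ranking iteration that such graph-based ranking approaches build on; the paper's multi-graph integration and temporal-consistency term are not included, and the toy affinity matrix is made up.

```python
import numpy as np

def graph_ranking(W, y, alpha=0.9, iters=100):
    """Classic graph-based (manifold) ranking: given an affinity matrix W and
    a query indicator vector y, iterate f <- alpha*S*f + (1-alpha)*y,
    where S is the symmetrically normalized affinity matrix."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f

# toy usage: rank 5 candidate samples against the first one as the query
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
y = np.array([1, 0, 0, 0, 0], dtype=float)
scores = graph_ranking(W, y)
```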
  • ABSTRACT: Single-image super-resolution (SR) aims to construct a high-resolution (HR) version of a single low-resolution (LR) image. SR reconstruction is challenging because of the missing details in the given LR image, so it is critical to explore and exploit effective prior knowledge to boost reconstruction performance. In this paper, we propose a novel SR method that exploits both the directional group sparsity of the image gradients and directional features in similarity weight estimation. The proposed approach is based on two observations: 1) most sharp edges are oriented in a limited number of directions, and 2) an image pixel can be estimated by a weighted average of its neighbors (see the sketch after this entry). In light of these observations, we apply the curvelet transform to extract directional features, which are then used for region selection and weight estimation. A combined total variation (CTV) regularizer is presented, which assumes that the gradients in natural images have a straightforward group sparsity structure. In addition, a directional non-local means (D-NLM) regularization term takes pixel values and directional information into account to suppress unwanted artifacts. Assembling these regularization terms, we minimize the resulting energy function using the templates for first-order conic solvers (TFOCS) framework. Thorough quantitative and qualitative results in terms of PSNR, SSIM, IFC, and preference matrix demonstrate that the proposed approach achieves higher-quality SR reconstruction than state-of-the-art algorithms.
    IEEE Transactions on Image Processing 05/2015; 24(9). DOI:10.1109/TIP.2015.2432713 · 3.11 Impact Factor
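    A simplified sketch of the second observation only: a pixel estimated as a similarity-weighted average of its neighbors, plain non-local-means style, without the directional (curvelet-based) weighting of the paper. All parameters and data are illustrative.

```python
import numpy as np

def nlm_pixel_estimate(image, r, c, patch=3, search=7, h=10.0):
    """Estimate the pixel at (r, c) as a similarity-weighted average of the
    pixels in a search window, with weights computed from patch differences
    (plain non-local means, without the paper's directional terms)."""
    pr, sr = patch // 2, search // 2
    ref = image[r - pr:r + pr + 1, c - pr:c + pr + 1].astype(float)
    weights, values = [], []
    for i in range(r - sr, r + sr + 1):
        for j in range(c - sr, c + sr + 1):
            cand = image[i - pr:i + pr + 1, j - pr:j + pr + 1].astype(float)
            weights.append(np.exp(-np.sum((ref - cand) ** 2) / (h ** 2)))
            values.append(float(image[i, j]))
    weights = np.asarray(weights)
    return float(np.dot(weights, values) / weights.sum())

# toy usage on a random image, away from the borders
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
estimate = nlm_pixel_estimate(img, r=16, c=16)
```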
  • ABSTRACT: Recent advances in object detection have led to the development of segmentation-by-detection approaches that integrate top-down geometric priors for multi-class object segmentation. A key yet under-addressed issue in using top-down cues for multi-class object segmentation by detection is efficiently generating robust and accurate geometric priors. In this paper, we propose a random geometric prior forest scheme to obtain object-adaptive geometric priors efficiently and robustly. In this scheme, a testing object first searches for training neighbors with similar geometries using the random geometric prior forest, and the geometry of the testing object is then reconstructed by linearly combining the geometries of its neighbors (see the sketch after this entry). Our scheme enjoys several favorable properties compared with conventional methods. First, it is robust and very fast, because its inference does not suffer from bad initializations, poor local minima, or complex optimization. Second, the figure/ground geometries of training samples are utilized in a multi-task manner. Third, the scheme is object-adaptive but does not require the labeling of parts or poselets, and is thus quite easy to implement. To demonstrate its effectiveness, we integrate the obtained top-down geometric priors with conventional bottom-up color cues in a graph-cut framework. The proposed random geometric prior forest achieves the best segmentation results of all of the methods tested on VOC2010/2012 and is 90 times faster than the current state-of-the-art method.
    IEEE Transactions on Image Processing 05/2015; 24(10). DOI:10.1109/TIP.2015.2432711 · 3.11 Impact Factor
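    A rough stand-in for the prior-generation step, assuming hypothetical descriptors and masks: a plain nearest-neighbour search (scikit-learn) replaces the random geometric prior forest, and the prior is a weighted linear combination of the neighbours' figure/ground masks.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical training set: one geometric descriptor per object (e.g. pooled
# detector features) and one flattened figure/ground mask per object.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(500, 128))
train_masks = (rng.uniform(size=(500, 64 * 64)) > 0.5).astype(float)

# Plain kNN stands in for the random geometric prior forest.
knn = NearestNeighbors(n_neighbors=10).fit(train_desc)

def geometric_prior(test_desc):
    """Reconstruct a soft figure/ground prior as a similarity-weighted
    linear combination of the neighbours' masks."""
    dist, idx = knn.kneighbors(test_desc.reshape(1, -1))
    weights = 1.0 / (dist[0] + 1e-6)
    weights /= weights.sum()
    return (weights[:, None] * train_masks[idx[0]]).sum(axis=0)

prior = geometric_prior(rng.normal(size=128)).reshape(64, 64)
```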
  • PLoS ONE 05/2015; 10(5):e0124685. DOI:10.1371/journal.pone.0124685 · 3.53 Impact Factor
  • ABSTRACT: A novel level set method (LSM) with a shape-prior constraint is proposed to implement selective image segmentation. First, the shape priors are aligned using image moments to remove spatial variation (see the sketch after this entry). Second, the aligned shape priors are projected into the subspace spanned by locality preserving projection to measure the similarity between shapes. Finally, a new energy functional is built by combining data-driven and shape-driven energy terms to implement a selective image segmentation method. We assess the proposed method and several representative LSMs on synthetic, medical, and natural images; the results suggest that the proposed method is superior to purely data-driven LSMs and to representative LSMs with shape priors.
    Neurocomputing 05/2015; DOI:10.1016/j.neucom.2014.07.086 · 2.01 Impact Factor
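    A minimal sketch of moment-based shape alignment, assuming a binary mask: the centroid (first-order moments) is moved to the origin and scale is normalized by the second-order moments; rotation normalization and the LPP subspace step are omitted.

```python
import numpy as np

def align_shape_by_moments(mask):
    """Roughly align a binary shape mask using image moments: translate the
    centroid to the origin and normalize scale by the second-order moments.
    (Rotation normalization via central moments is omitted for brevity.)"""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)                           # first-order moments
    centered = pts - centroid
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())   # second-order moment
    return centered / max(scale, 1e-12)                   # aligned point set

# toy usage: a filled rectangle placed off-centre
mask = np.zeros((64, 64), dtype=bool)
mask[10:30, 20:45] = True
aligned_points = align_shape_by_moments(mask)
```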
  • ABSTRACT: Partial least squares (PLS) regression has achieved desirable performance for modeling the relationship between a set of dependent (response) variables and another set of independent (predictor) variables, especially when the sample size is small relative to the dimension of these variables. In each iteration, PLS finds two latent variables from the sets of dependent and independent variables by maximizing the product of three factors: the variances of the two latent variables and the square of the correlation between them. In this paper, we derive the mathematical formulation of the relationship between the mean square error (MSE) and these three factors. We find that the MSE is not monotonic in the product of the three factors; however, the corresponding optimization problem is difficult to solve if we extract the optimal latent variables directly from this relationship. To address these problems, a novel multilinear regression model, variance-constrained partial least squares (VCPLS), is proposed. In the proposed VCPLS, we find the latent variables by maximizing the product of the variance of the latent variable from the dependent variables and the square of the correlation between the two latent variables, while constraining the variance of the latent variable from the independent variables to be larger than a predetermined threshold (see the sketch after this entry). The corresponding optimization problem can be solved computationally efficiently, and the latent variables extracted by VCPLS are near-optimal. Compared with classical PLS and its variants, VCPLS achieves lower prediction error in the MSE sense. Experiments are conducted on three near-infrared spectroscopy (NIR) data sets. To demonstrate the applicability of VCPLS, we also conducted experiments on another data set with different characteristics from NIR data. The experimental results verify the superiority of the proposed VCPLS.
    Chemometrics and Intelligent Laboratory Systems 04/2015; 145. DOI:10.1016/j.chemolab.2015.04.014 · 2.38 Impact Factor
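    For orientation, the sketch below extracts the first component of classical PLS (via an SVD of the cross-covariance matrix), whose latent variables maximize the product of the two variances and the squared correlation; VCPLS itself, with its variance constraint, is not implemented here. Data are synthetic.

```python
import numpy as np

def first_pls_component(X, Y):
    """Classical PLS step (not VCPLS itself): the first pair of weight vectors
    maximizes the covariance between the latent variables t = X w and u = Y c,
    i.e. the product of their variances and their squared correlation."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    C = Xc.T @ Yc                                # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    w, c = U[:, 0], Vt[0]                        # leading singular vectors
    t, u = Xc @ w, Yc @ c                        # latent variables
    return w, c, t, u

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(40, 200)), rng.normal(size=(40, 3))
w, c, t, u = first_pls_component(X, Y)
# VCPLS would additionally require var(t) to exceed a chosen threshold.
print(t.var(), u.var(), np.corrcoef(t, u)[0, 1])
```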
  • ABSTRACT: Dense feature extraction is becoming increasingly popular in face recognition tasks. Systems based on this approach have demonstrated impressive performance in a range of challenging scenarios [1]. However, improvements in discriminative power come at a computational cost and with a risk of over-fitting. In this paper, we propose a new approach to dense feature extraction for face recognition, which consists of two steps: first, an encoding scheme is devised that compresses high-dimensional dense features into a compact representation by maximizing the intra-user correlation; and second, we develop an adaptive feature matching algorithm for effective classification. This matching method, in contrast to previous methods, constructs and chooses a small subset of training samples for adaptive matching, resulting in further performance gains. Experiments using several challenging face databases, including LFW, Morph Album 2, CUHK Optical-infrared, and FERET, demonstrate that the proposed approach consistently outperforms the current state-of-the-art.
    IEEE Transactions on Image Processing 04/2015; 24(9). DOI:10.1109/TIP.2015.2426413 · 3.11 Impact Factor
  • Yong Luo, Tongliang Liu, Dacheng Tao, Chao Xu
    ABSTRACT: There is growing interest in multi-label image classification due to its critical role in web-based image analytics applications, such as large-scale image retrieval and browsing. Matrix completion (MC) has recently been introduced as a method for transductive (semi-supervised) multi-label classification and has several distinct advantages, including robustness to missing data and background noise in both feature and label space. However, it is limited by considering only data represented by a single-view feature, which cannot precisely characterize images containing several semantic concepts. To utilize multiple features taken from different views, we would have to concatenate the different features into a long vector, but this concatenation is prone to over-fitting and often leads to very high time complexity in MC-based image classification. Therefore, we propose to combine the MC outputs of different views with learned weights, and present the multi-view matrix completion (MVMC) framework for transductive multi-label image classification. To learn the view combination weights effectively, we apply a cross-validation strategy on the labeled set: MVMC splits the labeled set into two parts and predicts the labels of one part using the known labels of the other part; the predicted labels are then used to learn the view combination coefficients (see the sketch after this entry). In the learning process, we adopt the average precision (AP) loss, which is particularly suitable for multi-label image classification, since ranking-based criteria are critical for evaluating a multi-label classification system. A least-squares loss formulation is also presented for the sake of efficiency, and the robustness of the AP-loss-based algorithm compared with the other losses is investigated. Experimental evaluation on two real-world datasets (PASCAL VOC'07 and MIR Flickr) demonstrates the effectiveness of MVMC for transductive (semi-supervised) multi-label image classification, and shows that MVMC can exploit the complementary properties of different features and output consistent labels for improved multi-label image classification.
    IEEE Transactions on Image Processing 04/2015; 24(8). DOI:10.1109/TIP.2015.2421309 · 3.11 Impact Factor
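    A hedged sketch of the view-combination step under the simpler least-squares loss, assuming hypothetical per-view score matrices: the weights are fit on a held-out labelled split and then used to fuse the per-view outputs. The matrix completion itself and the AP loss are not shown.

```python
import numpy as np

def learn_view_weights(view_scores_val, labels_val):
    """Learn view-combination weights by a least-squares fit on a held-out
    labelled split: minimize || sum_v w_v * S_v - Y ||_F^2 over the weights w."""
    A = np.stack([S.ravel() for S in view_scores_val], axis=1)  # (n*c, n_views)
    w, *_ = np.linalg.lstsq(A, labels_val.ravel(), rcond=None)
    return w

def combine_views(view_scores_test, w):
    """Fuse per-view score matrices with the learned weights."""
    return sum(wv * S for wv, S in zip(w, view_scores_test))

# toy usage: three hypothetical per-view label-score matrices (images x classes)
rng = np.random.default_rng(0)
Y_val = (rng.uniform(size=(30, 5)) > 0.7).astype(float)
scores_val = [Y_val + 0.3 * rng.normal(size=Y_val.shape) for _ in range(3)]
w = learn_view_weights(scores_val, Y_val)
scores_test = [rng.normal(size=(10, 5)) for _ in range(3)]
fused = combine_views(scores_test, w)
```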
  • ABSTRACT: An auroral substorm is an important geophysical phenomenon that reflects the interaction between the solar wind and the Earth's magnetosphere. Detecting substorms is of practical significance for preventing disruption to communication and global positioning systems. However, existing detection methods can be inaccurate or require time-consuming manual analysis and are therefore impractical for large-scale data sets. In this paper, we propose an automatic auroral substorm detection method based on a shape-constrained sparse and low-rank decomposition (SCSLD) framework (a generic sparse-plus-low-rank sketch follows this entry). Our method automatically detects real substorm onsets in large-scale aurora sequences, which overcomes the limitations of manual detection. To reduce the noise interference inherent in current SLD methods, we introduce a shape constraint that forces the noise to be assigned to the low-rank part (stationary background), thus ensuring the accuracy of the sparse part (moving object) and improving performance. Experiments conducted on aurora sequences in solar cycle 23 (1996-2008) show that the proposed SCSLD method achieves good performance for motion analysis of aurora sequences. Moreover, the obtained results are highly consistent with manual analysis, suggesting that the proposed automatic method is useful and effective in practice.
    IEEE transactions on neural networks and learning systems 03/2015; DOI:10.1109/TNNLS.2015.2411613 · 4.37 Impact Factor
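    A generic sparse-plus-low-rank split, assuming synthetic frame data: a simple alternating heuristic keeps a rank-r background and soft-thresholds the residual into a sparse foreground. The shape constraint that defines SCSLD is not included.

```python
import numpy as np

def sparse_lowrank_split(D, rank=2, sparsity=0.1, iters=30):
    """Very simple alternating heuristic for D ~ L + S: keep a rank-r
    approximation as the background L and soft-threshold the residual to get
    the sparse foreground S (no shape constraint, unlike SCSLD)."""
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # low-rank background
        R = D - L
        thr = sparsity * np.abs(R).max()
        S = np.sign(R) * np.maximum(np.abs(R) - thr, 0.0)  # sparse foreground
    return L, S

# toy usage: columns are (vectorized) frames of a hypothetical aurora sequence
rng = np.random.default_rng(0)
background = np.outer(rng.normal(size=400), np.ones(50))   # static background
events = (rng.uniform(size=(400, 50)) > 0.98) * 5.0        # sparse bright events
L, S = sparse_lowrank_split(background + events)
```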
  • Lin Zhao, Xinbo Gao, Dacheng Tao, Xuelong Li
    ABSTRACT: We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame. This is not a trivial task, because people may wear different kinds of clothing and may move very quickly and unpredictably. Pose estimation is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, jointly parsing multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. Previous models usually resort to approximate inference, which cannot guarantee good results and is computationally expensive. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler, tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently solve the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground-truth layout annotations, and the results indicate the effectiveness of the proposed TEIM framework.
    IEEE transactions on neural networks and learning systems 03/2015; DOI:10.1109/TNNLS.2015.2411287 · 4.37 Impact Factor
  • ABSTRACT: Studies in neuroscience and biological vision have shown that the human retina has strong computational power and that its information representation supports vision tasks on both the ventral and dorsal pathways. In this paper, a new local image descriptor, termed Distinctive Efficient Robust Features (DERF), is derived by modeling the response and distribution properties of the parvocellular-projecting ganglion cells (P-GCs) in the primate retina. DERF features an exponential scale distribution, an exponential grid structure, and a circularly symmetric Difference of Gaussians (DoG) function used as the convolution kernel (see the DoG sketch after this entry), all of which are consistent with the characteristics of the ganglion cell array found in neurophysiology, anatomy, and biophysics. In addition, a new explanation for local descriptor design is presented from the perspective of wavelet tight frames: the DoG is naturally a wavelet, and the structure of the grid-point array in our descriptor is closely related to the spatial sampling of wavelets. The DoG wavelet itself forms a frame, and when we modulate the parameters of our descriptor to make the frame tighter, the performance of the DERF descriptor improves accordingly. This is verified by designing a tight-frame DoG (TF-DoG), which leads to much better performance. Extensive experiments on the image matching task on the Multiview Stereo Correspondence Dataset demonstrate that DERF outperforms state-of-the-art hand-crafted and learned descriptors, while remaining robust and being much faster to compute.
    IEEE Transactions on Image Processing 03/2015; 24(8). DOI:10.1109/TIP.2015.2409739 · 3.11 Impact Factor
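    A small sketch of the DoG building block, using SciPy Gaussian filtering on a random image; the exponential grid structure, descriptor pooling, and tight-frame tuning of DERF are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma, k=1.6):
    """Difference-of-Gaussians response: the image filtered at scale sigma
    minus the image filtered at scale k*sigma, a standard approximation of a
    centre-surround (ganglion-cell-like) receptive field."""
    img = image.astype(float)
    return gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)

# toy usage: responses at an exponentially spaced set of scales
rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))
responses = [dog_response(img, sigma) for sigma in 1.0 * 2.0 ** np.arange(4)]
```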
  • Lin Zhao, Xinbo Gao, Dacheng Tao, Xuelong Li
    ABSTRACT: Articulated human pose estimation in unconstrained conditions is a great challenge. We propose a deep structure that represents the human body at different granularities, from coarse to fine, to better detect parts and describe the spatial constraints between them. Typical approaches to this problem use a single-level structure, which makes it difficult to capture various body appearances and to model high-order part dependencies. In this paper, we build a three-layer Markov network to model the body structure, separating the whole body into poselets (combined parts) and then into parts representing joints. Parts at different levels are connected through parent-child relationships to represent high-order spatial relationships. Unlike other multi-layer models, our approach explores a more reasonable granularity for part detection and carefully designs part connections to model body configurations more effectively. Moreover, each part in our model has multiple types so as to capture a wide range of pose modes, and the model is a tree structure, which can be trained jointly and favors exact inference. Extensive experimental results on two challenging datasets show that our model improves on or is on par with state-of-the-art approaches.
    Signal Processing 03/2015; 108:36–45. DOI:10.1016/j.sigpro.2014.07.031 · 2.24 Impact Factor
  • ABSTRACT: Various sparse-representation-based methods have been proposed to solve tracking problems, and most of them employ least-squares (LS) criteria to learn the sparse representation. In many tracking scenarios, traditional LS-based methods may not perform well owing to the presence of heavy-tailed noise. In this paper, we present a tracking approach that uses an approximate least absolute deviation (LAD)-based multitask multiview sparse learning method to enjoy the robustness of LAD and take advantage of multiple types of visual features, such as intensity, color, and texture. The proposed method is integrated into a particle filter framework, where learning the sparse representation for each view of a single particle is regarded as an individual task. The underlying relationship between tasks across different views and different particles is jointly exploited in a unified robust multitask formulation based on LAD. In addition, to capture the frequently emerging outlier tasks, we decompose the representation matrix into two collaborative components that enable a more robust and accurate approximation. We show that the proposed formulation can be effectively approximated by Nesterov's smoothing method and efficiently solved using the accelerated proximal gradient method (a basic proximal-gradient sketch follows this entry). The presented tracker is implemented using four types of features and is tested on numerous synthetic sequences and real-world video sequences, including the CVPR2013 tracking benchmark and the ALOV++ dataset. Both the qualitative and quantitative results demonstrate the superior performance of the proposed approach compared with several state-of-the-art trackers.
    IEEE transactions on neural networks and learning systems 02/2015; DOI:10.1109/TNNLS.2015.2399233 · 4.37 Impact Factor
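    A basic proximal-gradient (ISTA) sketch for a single-task, single-view l1 sparse representation over a hypothetical template dictionary; the LAD-based multitask multiview formulation, Nesterov smoothing, and acceleration of the paper are not included.

```python
import numpy as np

def soft_threshold(v, thr):
    """Proximal operator of the l1 norm, the key step in proximal gradient
    and accelerated proximal gradient methods for sparse representations."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def sparse_code_ista(D, y, lam=0.1, iters=200):
    """Plain ISTA for min_x 0.5*||D x - y||^2 + lam*||x||_1 -- a single-task,
    single-view stand-in for the multitask multiview formulation in the paper."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# toy usage: code one particle's feature vector over a template dictionary
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 20))              # columns: target/trivial templates
y = D[:, 3] + 0.01 * rng.normal(size=64)   # observation close to template 3
x = sparse_code_ista(D, y)
```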
  • Changxing Ding, Dacheng Tao
    ABSTRACT: The capacity to recognize faces under varied poses is a fundamental human ability that presents a unique challenge for computer vision systems. Compared to frontal face recognition, which has been intensively studied and has gradually matured in the past few decades, pose-invariant face recognition (PIFR) remains a largely unsolved problem. However, PIFR is crucial to realizing the full potential of face recognition for real-world applications, since face recognition is intrinsically a passive biometric technology for recognizing uncooperative subjects. In this paper, we discuss the inherent difficulties in PIFR and present a comprehensive review of established techniques. Existing PIFR methods can be grouped into four categories, i.e., pose-robust feature extraction approaches, multi-view subspace learning approaches, face synthesis approaches, and hybrid approaches. The motivations, strategies, pros/cons, and performance of representative approaches are described and compared. Moreover, promising directions for future research are discussed.
  • Yuan Gao, Miaojing Shi, Dacheng Tao, Chao Xu
    ABSTRACT: The bag-of-visual-words (BoW) model is effective for representing images and videos in many computer vision problems and achieves promising performance in image retrieval. Nevertheless, its retrieval efficiency on a large-scale database is not acceptable for practical use. Considering that the relevant images for a given query are more likely to be distinctive than ambiguous in the database, this paper defines "database saliency" as a distinctiveness score calculated for every image to measure its overall "saliency" in the database. By taking advantage of database saliency, we propose a saliency-inspired fast image retrieval scheme, S-sim, which significantly improves efficiency while retaining state-of-the-art accuracy in image retrieval. There are two stages in S-sim: the bottom-up saliency mechanism computes the database saliency value of each image by hierarchically decomposing a posterior probability over local patches and visual words, and the concurrent information of visual words is then propagated bottom-up to estimate distinctiveness; the top-down saliency mechanism discriminatively expands the query via a very low-dimensional linear SVM trained on the top-ranked images after the initial search, and images are then re-ranked by their distances to the decision boundary as well as their database saliency values (see the sketch after this entry). We comprehensively evaluate S-sim on common retrieval benchmarks, e.g., the Oxford and Paris datasets. Thorough experiments suggest that, because of the offline database saliency computation and the online low-dimensional SVM, our approach significantly speeds up online retrieval and outperforms state-of-the-art BoW-based image retrieval schemes.
    IEEE Transactions on Multimedia 02/2015; 17(3):359-369. DOI:10.1109/TMM.2015.2389616 · 1.78 Impact Factor
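    A hedged sketch of the top-down stage only, with made-up data: the top-ranked images from an initial search are treated as positives, a low-dimensional linear SVM (scikit-learn) is trained, and the database is re-ranked by distance to the decision boundary. The database-saliency computation is omitted.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical low-dimensional image descriptors and an initial ranking for one query.
rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 32))
initial_ranking = np.argsort(rng.uniform(size=1000))       # stand-in for the BoW search

# Discriminative query expansion: top-ranked images as positives, a sample of
# low-ranked images as negatives; re-rank by signed distance to the boundary.
pos = initial_ranking[:20]
neg = initial_ranking[-200:]
X = np.vstack([db_features[pos], db_features[neg]])
y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
svm = LinearSVC(C=1.0).fit(X, y)
scores = svm.decision_function(db_features)                # distance to the boundary
reranked = np.argsort(-scores)
```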

Publication Stats

8k Citations
729.31 Total Impact Points

Institutions

  • 2010–2015
    • University of Technology Sydney 
      • Centre for Quantum Computation and Intelligent Systems (QCIS)
      • Faculty of Engineering and Information Technology
      Sydney, New South Wales, Australia
  • 2008–2012
    • Zhejiang University
      • College of Computer Science and Technology
      Hangzhou, Zhejiang Sheng, China
    • Nanyang Technological University
      • School of Computer Engineering
       Singapore, Singapore
    • Tianjin University
      • Department of Electronic Information Engineering
       Tianjin, China
  • 2011
    • Wuhan University
      • State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing
      Wuhan, Hubei, China
    • National University of Defense Technology
      • National Key Laboratory of Parallel and Distributed Processing
       Changsha, Hunan, China
    • State Key Laboratory Of Transient Optics And Photonics
       Xi'an, Shaanxi, China
    • Xiamen University
      • Department of Computer Science
      Xiamen, Fujian, China
  • 2010–2011
    • Chinese Academy of Sciences
      • Xi'an Institute of Optics and Precision Mechanics
       Beijing, China
  • 2009–2011
    • Xidian University
      • School of Life Sciences and Technology
       Xi'an, Shaanxi, China
  • 2007–2010
    • The University of Hong Kong
      • Department of Computer Science
      Hong Kong, Hong Kong
  • 2007–2009
    • The Hong Kong Polytechnic University
      • Department of Computing
      Hong Kong, Hong Kong
  • 2005–2009
    • Birkbeck, University of London
      • Department of Computer Science and Information Systems
       London, England, United Kingdom
  • 2006–2007
    • University of London
       London, England, United Kingdom
    • The University of Sheffield
      • Department of Electronic and Electrical Engineering
       Sheffield, England, United Kingdom
  • 2004–2005
    • The Chinese University of Hong Kong
      • Department of Information Engineering
      Hong Kong, Hong Kong