Object Tracking via Partial Least Squares Analysis
ABSTRACT: We propose an object tracking algorithm that learns a set of appearance models for adaptive discriminative object representation. In this paper, object tracking is posed as a binary classification problem in which the correlation between object appearance and the class labels of foreground and background samples is modeled by partial least squares (PLS) analysis to generate a low-dimensional discriminative feature subspace. As object appearance is temporally correlated and likely to repeat over time, we learn and adapt multiple appearance models with PLS analysis for robust tracking. The proposed algorithm exploits both the ground-truth appearance of the target labeled in the first frame and the image observations obtained online, thereby alleviating the tracking drift caused by model updates. Experiments on numerous challenging sequences and comparisons with state-of-the-art methods demonstrate the favorable performance of the proposed tracking algorithm.
Machine Vision and Applications 10/2014; 25(7):1859-1876. DOI:10.1007/s00138-014-0632-3 · 1.44 Impact Factor
ABSTRACT: Large-area, high-resolution visual monitoring systems are indispensable in surveillance applications. Constructing such systems requires high-quality image capture and display devices. Whereas high-quality displays have developed rapidly, as exemplified by the 85-inch 4K ultrahigh-definition TV announced by Samsung at the 2013 Consumer Electronics Show (CES), high-resolution surveillance cameras have progressed slowly and are not yet widely used compared with displays. In this study, we designed an innovative framework using a dual-camera system, comprising a wide-angle fixed camera and a high-resolution pan-tilt-zoom (PTZ) camera, to construct a large-area, multilayered, high-resolution visual monitoring system that supports multiresolution monitoring of moving objects. First, we developed a novel calibration approach to estimate the relationship between the two cameras and to calibrate the PTZ camera. The PTZ camera was calibrated by exploiting the property that the pan-tilt angle of a distinct target remains consistent across zoom factors, which accelerates the calibration process without affecting accuracy; this calibration process has not been reported previously. After calibrating the dual-camera system, we used the PTZ camera to synthesize a large-area, high-resolution background image. When foreground targets were detected in the images captured by the wide-angle camera, the PTZ camera was controlled to continuously track the user-selected target. Finally, we integrated the preconstructed high-resolution background, the low-resolution foreground captured by the wide-angle camera, and the high-resolution foreground captured by the PTZ camera to generate a large-area, multilayered, high-resolution view of the scene.
ACM Transactions on Multimedia Computing Communications and Applications 01/2015; 11(2):1-23. DOI:10.1145/2645862 · 0.90 Impact Factor
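The wide-angle-to-PTZ handoff described above can be sketched as a geometric mapping: a target pixel in the fixed wide-angle view is converted to pan and tilt angles for the PTZ camera. This sketch assumes a shared optical center and a pinhole model with a known focal length; it is a simplified stand-in, not the paper's calibration procedure.

```python
import math

def pixel_to_pan_tilt(u, v, cx, cy, focal_px):
    """Convert a pixel (u, v) in the wide-angle image to (pan, tilt) in degrees.

    cx, cy   -- principal point of the wide-angle image (pixels)
    focal_px -- wide-angle focal length (pixels)
    """
    pan = math.degrees(math.atan2(u - cx, focal_px))
    tilt = math.degrees(math.atan2(cy - v, focal_px))  # image y-axis points down
    return pan, tilt

# Example: a target 500 px right of center in a 1920x1080 frame, f = 1000 px.
pan, tilt = pixel_to_pan_tilt(1460, 540, 960.0, 540.0, 1000.0)
print(round(pan, 2), round(tilt, 2))
```

A real system would replace the assumed intrinsics with the estimated camera relationship and apply per-zoom-factor corrections from the calibration step.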
ABSTRACT: Sparse representation has attracted increasing attention in visual tracking. However, most sparse-representation-based trackers focus only on modeling the target appearance and do not consider learning the sparse representation when the training samples are imprecise, and hence may drift or fail in challenging scenes. In this paper, we present a novel online tracking algorithm that integrates online multiple instance learning into a recent sparse representation scheme. For tracking, an integrated sparse representation combining texture, intensity, and local spatial information is proposed to model the target; this representation accounts for both occlusion and appearance change. An efficient online learning approach is then proposed to select the most distinguishable features for separating the target from the background samples. In addition, the sparse representation is dynamically updated online with respect to the current context. Both qualitative and quantitative evaluations on challenging benchmark video sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
Journal of Visual Communication and Image Representation 01/2015; 26:231-246. DOI:10.1016/j.jvcir.2014.11.013 · 1.36 Impact Factor
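The sparse-representation idea underlying trackers like the one above can be sketched as coding a tracking candidate over a dictionary of target templates plus trivial (one-hot) templates that absorb occlusion. The template count, dimensions, and the Lasso solver here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

d, n_templates = 32, 10
T = rng.normal(size=(d, n_templates))  # target templates (columns)
T /= np.linalg.norm(T, axis=0)         # unit-normalize each template
I = np.eye(d)                          # trivial templates for occlusion
D = np.hstack([T, I])                  # full dictionary

# Candidate: mostly template 0, with a simulated occlusion on a few pixels.
y = 0.9 * T[:, 0]
y[:3] += 0.5

# L1-regularized coding yields a sparse coefficient vector over D.
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(D, y)
coef = lasso.coef_

# Reconstruction error using only the target templates measures how
# target-like the candidate is (low error -> likely the target).
recon = T @ coef[:n_templates]
err = float(np.linalg.norm(y - recon))
print(coef.shape, round(err, 3))
```

In a full tracker, each candidate window would be coded this way and the candidate with the smallest target-template reconstruction error selected, with the occlusion coefficients flagging which pixels to exclude from model updates.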