Article

Online Boosting for Vehicle Detection

Dept. of Electr. Eng., Nat. Taipei Univ. of Technol., Taipei, Taiwan
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) (Impact Factor: 3.24). 07/2010; DOI: 10.1109/TSMCB.2009.2032527
Source: IEEE Xplore

ABSTRACT This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm: an online AdaBoost approach applied to a cascade of strong classifiers rather than a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot be updated effectively when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that can be trained online in response to changing traffic environments. To keep the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and the up-to-date performance of each weak classifier. The proposed online boosting method improves system adaptability and accuracy when dealing with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely on far more extensive training to reach comparable results and cannot be further updated online. Our approach has been successfully validated in real traffic environments through experiments with an onboard charge-coupled-device camera in a roadway vehicle.
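The abstract does not reproduce the paper's exact update rules, so the following is only a rough sketch of the general online-boosting idea it builds on (in the style of Oza and Russell's online AdaBoost, not the authors' published algorithm): each labeled sample arriving online updates every weak classifier's error estimate and voting weight, with misclassified samples weighted more heavily for later weak classifiers.

```python
import math

class OnlineBooster:
    """Minimal online-boosting sketch (Oza/Russell style), NOT the exact
    algorithm of the paper: each incoming labeled sample (x, y), y in
    {+1, -1}, updates every weak classifier's error estimate and vote."""

    def __init__(self, weak_learners):
        self.weak = weak_learners              # callables: x -> +1 or -1
        self.correct = [1e-3] * len(weak_learners)  # weighted correct counts
        self.wrong = [1e-3] * len(weak_learners)    # weighted error counts

    def _err(self, i):
        return self.wrong[i] / (self.correct[i] + self.wrong[i])

    def update(self, x, y):
        lam = 1.0  # sample weight, reweighted as it passes each learner
        for i, h in enumerate(self.weak):
            if h(x) == y:
                self.correct[i] += lam
                # down-weight samples this learner already handles well
                lam *= 0.5 / max(1.0 - self._err(i), 1e-6)
            else:
                self.wrong[i] += lam
                # up-weight samples this learner gets wrong
                lam *= 0.5 / max(self._err(i), 1e-6)

    def predict(self, x):
        score = 0.0
        for i, h in enumerate(self.weak):
            e = min(max(self._err(i), 1e-6), 1.0 - 1e-6)
            alpha = 0.5 * math.log((1.0 - e) / e)  # AdaBoost vote weight
            score += alpha * h(x)
        return 1 if score >= 0 else -1
```

With a few decision stumps as weak learners, the booster can be fed samples one at a time and queried between updates, which is the property the paper exploits to keep adapting to new traffic scenes.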

    ABSTRACT: We present a robust method for vehicle categorization in aerial images. The approach relies on a multiple-classifier system that merges the outputs of classifiers applied at various camera incidence angles. The individual classifiers are built by matching 3D templates to vehicle silhouettes using a local projection model consistent with the limited knowledge we have of the viewing-condition parameters. We assess the validity of our approach on a challenging dataset of images captured in real-world conditions.
    7th International Symposium on Image and Signal Processing and Analysis (ISPA), 10/2011
    ABSTRACT: We propose a viewpoint-independent object-detection algorithm that detects objects in videos based on their 2-D and 3-D information. Object-specific quasi-3-D templates are proposed and applied to match objects' 2-D contours and to calculate their 3-D sizes. A quasi-3-D template is the contour and the 3-D bounding cube of an object viewed from a certain panning and tilting angle. A total of 2660 pedestrian templates and 1995 vehicle templates, encompassing 19 tilting and 35 panning angles, are used in this study. To detect objects, we first match the 2-D contours of object candidates with known objects' contours, identifying the object templates with large 2-D contour-matching scores. In this step, we exploit prior knowledge of the viewpoint from which the object is viewed to speed up the template matching, and a viewpoint likelihood is also assigned to each contour-matched template. Then, we calculate the 3-D widths, heights, and lengths of the contour-matched candidates, as well as the corresponding 3-D-size-matching scores. The overall matching score is obtained by combining the aforementioned likelihood and scores. The major contribution of this paper is to explore the joint use of 2-D and 3-D features in object detection; it shows that, by considering 2-D contours and 3-D sizes, one can achieve promising object-detection rates. The proposed algorithms were evaluated on both pedestrian and vehicle sequences and yielded significantly better detection results than the best results reported in PETS 2009, showing that our algorithm outperforms state-of-the-art pedestrian-detection algorithms. Index Terms: Object detection, pedestrian detection, vehicle detection.
    IEEE Transactions on Intelligent Transportation Systems, 01/2011; 12:1599-1608 (Impact Factor: 3.06)
    ABSTRACT: In this paper, we present an intelligent vision-based on-road preceding-vehicle detection and tracking system built on computer vision techniques. Video stabilization is adopted as a pre-processing step to improve system reliability and stability. High detection performance is achieved via a machine-learning-based method. Our framework suits various automotive applications, yielding a detection rate above 90% at long range and a 99.1% tracking success rate at middle range.
    IEEE International Conference on Consumer Electronics (ICCE), 02/2011
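The quasi-3-D-template abstract above describes combining a viewpoint likelihood with 2-D contour-matching and 3-D-size-matching scores into one overall matching score. As a hedged illustration only (the weighting form and parameter names here are assumptions, not the formula published in the cited paper), one plausible combination scales a weighted sum of the two matching scores by the viewpoint likelihood:

```python
def overall_match_score(contour_score, size_score, view_likelihood,
                        w_contour=0.5, w_size=0.5):
    """Illustrative score combination, NOT the cited paper's published
    formula: weight the 2-D contour-matching and 3-D-size-matching
    scores (all assumed to lie in [0, 1]), then scale the result by
    the viewpoint likelihood of the matched template."""
    return view_likelihood * (w_contour * contour_score + w_size * size_score)
```

For example, a candidate with a strong contour match (0.8), a weaker size match (0.6), and a fully plausible viewpoint (1.0) would score 0.7 under these assumed equal weights.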

Full-text available from Wen-Chung Chang