Online Boosting for Vehicle Detection

Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 07/2010; 40(3):892-902. DOI: 10.1109/TSMCB.2009.2032527
Source: IEEE Xplore

ABSTRACT This paper presents a real-time vision-based vehicle detection system that employs an online boosting algorithm: an online AdaBoost approach applied to a cascade of strong classifiers rather than to a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot be updated effectively when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that can be trained online in response to changing traffic environments. To keep the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and the up-to-date performance of each weak classifier. The proposed online boosting method improves the system's adaptability and accuracy in dealing with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely on extensive training processes to reach comparable results and cannot be further updated online. Our approach has been validated in real traffic environments through experiments with an onboard charge-coupled-device camera in a roadway vehicle.
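The abstract describes an online AdaBoost variant in which each weak classifier is updated from incoming samples instead of being retrained offline. The sketch below illustrates the general flavor of online boosting in the style of Oza and Russell, where each new example is presented to every weak learner a Poisson-distributed number of times and the example's importance weight grows when a learner misclassifies it. The `OnlineStump` weak learner, the class names, and all parameters are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

class OnlineStump:
    """Toy online weak learner: thresholds one feature at the running
    midpoint between the per-class feature means (illustrative only)."""
    def __init__(self, feature_idx):
        self.idx = feature_idx
        self.sum = {+1: 0.0, -1: 0.0}
        self.count = {+1: 1e-9, -1: 1e-9}

    def update(self, x, y):
        self.sum[y] += x[self.idx]
        self.count[y] += 1.0

    def predict(self, x):
        mu_pos = self.sum[+1] / self.count[+1]
        mu_neg = self.sum[-1] / self.count[-1]
        thresh = 0.5 * (mu_pos + mu_neg)
        sign = 1 if mu_pos >= mu_neg else -1
        return sign if x[self.idx] >= thresh else -sign

class OnlineBooster:
    """Oza-Russell-style online boosting over M weak learners."""
    def __init__(self, n_features, M=10, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.weak = [OnlineStump(i % n_features) for i in range(M)]
        self.lam_sc = np.zeros(M)  # weighted counts of correct predictions
        self.lam_sw = np.zeros(M)  # weighted counts of mistakes

    def update(self, x, y):
        """Update every weak learner with one labeled sample, y in {+1, -1}."""
        lam = 1.0  # importance weight of this sample
        for m, h in enumerate(self.weak):
            k = self.rng.poisson(lam)
            for _ in range(k):      # train the weak learner k times
                h.update(x, y)
            n = self.lam_sc[m] + self.lam_sw[m]
            if h.predict(x) == y:
                self.lam_sc[m] += lam
                lam *= (n + lam) / (2.0 * self.lam_sc[m])
            else:
                self.lam_sw[m] += lam
                lam *= (n + lam) / (2.0 * self.lam_sw[m])

    def predict(self, x):
        """Weighted vote of the weak learners, weights from their error rates."""
        eps = 1e-9
        score = 0.0
        for m, h in enumerate(self.weak):
            total = self.lam_sc[m] + self.lam_sw[m]
            if total == 0:
                continue            # skip learners that have seen no data
            err = self.lam_sw[m] / total
            alpha = 0.5 * np.log((1.0 - err + eps) / (err + eps))
            score += alpha * h.predict(x)
        return 1 if score >= 0 else -1

# Usage sketch: booster = OnlineBooster(n_features=16, M=20)
#               booster.update(feature_vector, label)   # label in {+1, -1}
```

In a cascade setting such as the one described in the abstract, each strong classifier stage could maintain a booster of this kind and be tuned from newly collected samples as traffic conditions change.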

    • "Besides, the front obstacle detection was discussed enthusiastically in the past decade. Online boosting algorithm is proposed to detect the vehicle in front of the host car [2]. The online learning algorithm can conquer the online tuning problem for a practical system. "
    ABSTRACT: This paper presents an effective vehicle and motorcycle detection system for the blind-spot area in daytime and nighttime scenes. The proposed method identifies vehicles and motorcycles by detecting shadow and edge features in the daytime, and by locating headlights at nighttime. In the daytime, shadow segmentation is first performed to roughly locate the position of the vehicle. Then, vertical and horizontal edges are used to verify the existence of the vehicle. After that, a tracking procedure follows the same vehicle across consecutive frames, and the driving behavior is judged from the trajectory. At nighttime, lamps are extracted based on automatic histogram thresholding and are verified by spatial and temporal features against reflections from the pavement. The proposed real-time vision-based Blind Spot Safety-Assistance System has been implemented and evaluated on a TI DM6437 platform to perform vehicle detection on real highways, expressways, and urban roadways, and works well under sunny, cloudy, and rainy conditions in both daytime and nighttime. Experimental results demonstrate that the proposed vehicle detection approach is effective and feasible in various environments.
    International Journal of Vehicular Technology 01/2012; 2012. DOI: 10.1155/2012/506235
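The entry above mentions that nighttime lamps are extracted by automatic histogram thresholding. A common choice for such thresholding is Otsu's method, sketched below in Python; this is a generic illustration of histogram-based lamp segmentation, and the function names and the assumption of an 8-bit grayscale input are mine, not details taken from the cited paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on a 2-D uint8 image: pick the threshold that
    maximizes the between-class variance of the intensity histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 probability
    mu = np.cumsum(prob * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                # ignore degenerate splits
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

def extract_bright_lamps(gray):
    """Binary mask of candidate headlamp pixels in a nighttime frame."""
    t = otsu_threshold(gray)
    return gray > t
```

The resulting mask would still need the spatial and temporal verification steps mentioned in the abstract to reject pavement reflections.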
    • "Several 2-D features (i.e., derived from image intensities) were proposed for vehicle and pedestrian detection, e.g., Haarlike features [10]–[12] and histograms of oriented gradients [13], [14]. The contours of objects are more robust to changes in appearance, owing to different illumination conditions or colors of vehicles or pedestrians. "
    ABSTRACT: We propose a viewpoint-independent object-detection algorithm that detects objects in videos based on their 2-D and 3-D information. Object-specific quasi-3-D templates are proposed and applied to match objects' 2-D contours and to calculate their 3-D sizes. A quasi-3-D template is the contour and the 3-D bounding cube of an object viewed from a certain panning and tilting angle. A total of 2660 pedestrian templates and 1995 vehicle templates, encompassing 19 tilting and 35 panning angles, are used in this study. To detect objects, we first match the 2-D contours of object candidates with known objects' contours, and some object templates with large 2-D contour-matching scores are identified. In this step, we exploit prior knowledge about the viewpoint from which the object is seen to speed up the template matching, and a viewpoint likelihood is also assigned to each contour-matched template. Then, we calculate the 3-D widths, heights, and lengths of the contour-matched candidates, as well as the corresponding 3-D-size-matching scores. The overall matching score is obtained by combining the aforementioned likelihood and scores. The major contribution of this paper is to explore the joint use of 2-D and 3-D features in object detection. It shows that, by considering 2-D contours and 3-D sizes, one can achieve promising object detection rates. The proposed algorithms were evaluated on both pedestrian and vehicle sequences. They yielded significantly better detection results than the best results reported in PETS 2009, showing that our algorithm outperformed the state-of-the-art pedestrian-detection algorithms.
    IEEE Transactions on Intelligent Transportation Systems 12/2011; 12:1599-1608. DOI: 10.1109/TITS.2011.2166260
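The entry above combines 2-D contour-matching scores, 3-D size-matching scores, and viewpoint likelihoods into an overall matching score. The sketch below shows one plausible way such cues could be fused; the Gaussian size score, the multiplicative combination rule, and the data layout are assumptions made for illustration, not the paper's actual formulas.

```python
import numpy as np

def size_score(measured, template, sigma=0.15):
    """Gaussian agreement between a candidate's measured 3-D (w, h, l)
    and a template's nominal dimensions; sigma is an assumed relative
    tolerance."""
    measured = np.asarray(measured, float)
    template = np.asarray(template, float)
    rel_err = (measured - template) / template
    return float(np.exp(-0.5 * np.sum((rel_err / sigma) ** 2)))

def overall_score(contour_score, size_sc, viewpoint_likelihood):
    """Illustrative fusion of the three cues into one matching score."""
    return contour_score * size_sc * viewpoint_likelihood

def best_template(candidates):
    """candidates: list of dicts with keys 'id', 'contour', 'size', 'view',
    where 'contour', 'size', and 'view' are the three per-template scores."""
    return max(candidates,
               key=lambda c: overall_score(c['contour'], c['size'], c['view']))
```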
    • "In the context of aerial imagery, boosting has been used to learn how to detect vehicles seen from above in the nadir direction in urban areas [1]. A similar framework has also been used to model vehicle side-views [2], [3], but such discriminative models lack the precision needed for categorization, for which models that embed geometric relationships between the features are more satisfactory. Implicit Shape Models describe object patches by referencing to a visual codebook and estimate the distribution of the patch locations in the recognition framework [4], but different detectors are needed for different 2D aspects of a vehicle. "
    ABSTRACT: We present a robust method for vehicle categorization in aerial images. The approach relies on a multiple-classifier system that merges the answers of classifiers applied at various camera-angle incidences. The individual classifiers are built by matching 3-D templates to the vehicle silhouettes with a local projection model that is compatible with the assumption that little is known about the viewing-condition parameters. We assess the validity of our approach on a challenging dataset of images captured in real-world conditions.
    7th International Symposium on Image and Signal Processing and Analysis (ISPA); 10/2011
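The entry above describes merging the answers of classifiers applied at different camera-angle incidences. Below is a minimal sketch of a weighted sum-rule combiner over per-view class scores; the sum rule and the `view_weights` parameter are assumptions for illustration rather than the fusion scheme used in the paper.

```python
import numpy as np

def fuse_viewpoint_classifiers(per_view_scores, view_weights=None):
    """per_view_scores: array of shape (n_views, n_classes), where each row
    holds one classifier's class scores for a single camera incidence.
    Returns (fused class index, fused score vector) using a weighted
    sum rule across views."""
    per_view_scores = np.asarray(per_view_scores, float)
    if view_weights is None:
        view_weights = np.ones(per_view_scores.shape[0])
    fused = view_weights @ per_view_scores   # combine evidence across views
    return int(np.argmax(fused)), fused
```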

