Conference Paper

Fire and smoke detection in video with optimal mass transport based optical flow and neural networks.

DOI: 10.1109/ICIP.2010.5652119
Conference: Proceedings of the IEEE International Conference on Image Processing (ICIP 2010), September 26-29, 2010, Hong Kong, China
Source: DBLP

ABSTRACT: Detection of fire and smoke in video is of practical and theoretical interest. In this paper, we propose the use of optimal mass transport (OMT) optical flow as a low-dimensional descriptor of these complex processes. The detection process is posed as a supervised Bayesian classification problem over spatio-temporal neighborhoods of pixels; feature vectors are composed of OMT velocities and R, G, B color channels. The classifier is implemented as a single-hidden-layer neural network. Sample results show the probability of pixels belonging to fire or smoke. In particular, the classifier successfully distinguishes smoke from a similarly colored white wall, and fire from a similarly colored background.
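As a concrete, purely illustrative reading of the abstract, the Python sketch below stacks OMT optical-flow velocities and R, G, B values over a small spatial neighborhood into per-pixel feature vectors and trains a single-hidden-layer network on them. It is not the authors' implementation: the flow fields and labels here are random stand-ins, the temporal part of the neighborhood is omitted, and the neighborhood radius, hidden-layer size, and use of scikit-learn's MLPClassifier are all assumptions.

    # Hedged sketch, not the authors' code: per-pixel features from OMT flow
    # velocities (u, v) and color (r, g, b), classified by a 1-hidden-layer MLP.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def pixel_features(u, v, r, g, b, radius=1):
        """Stack flow and color channels over a (2*radius+1)^2 spatial
        neighborhood into one feature vector per pixel (borders cropped)."""
        H, W = u.shape
        chans = [u, v, r, g, b]
        feats = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                for c in chans:
                    feats.append(c[radius + dy:H - radius + dy,
                                   radius + dx:W - radius + dx])
        return np.stack(feats, axis=-1).reshape(-1, len(chans) * (2 * radius + 1) ** 2)

    # Toy frame: random flow/color fields and labels, just to exercise the pipeline.
    rng = np.random.default_rng(0)
    u, v, r, g, b = (rng.random((32, 32)) for _ in range(5))
    X = pixel_features(u, v, r, g, b)
    y = rng.integers(0, 2, size=X.shape[0])            # stand-in fire / not-fire labels

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X, y)
    p_fire = clf.predict_proba(X)[:, 1]                # per-pixel probability of "fire"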

  • ABSTRACT: We present a new Video Fire Detection (VFD) system for surveillance applications in fire and security industries. The system consists of three modules: pixel-level processing to identify potential fire blobs, blob-based spatial-temporal feature extraction, and a Support Vector Machine (SVM) classifier. The proposed novel spatial-temporal features include a spatial-temporal structural feature and a spatial-temporal contour dynamics feature. The spatial-temporal structural features are extracted from an accumulated motion mask (AMM) and an accumulated intensity template (AIT), capturing the concentric ring structure of fire intensity. The spatial-temporal dynamics features are based on the Fourier descriptor of contours in space and time, capturing the dynamic properties of fire. These global blob-based features are more robust and effective in rejecting false alarms and nuisance sources than pixel-wise features. In addition, extraction of the spatial-temporal features is very efficient, and no tracking of blobs or contours is needed. We also present a new multi-spectrum fire video database for algorithm testing. We evaluate the effectiveness of the proposed features on fire detection on the video database and obtain very promising results.
    Applications of Computer Vision (WACV), 2013 IEEE Workshop on; 01/2013
    (A minimal sketch of the accumulated motion mask appears after this list.)
  • ABSTRACT: In the state-of-the-art video-based smoke detection methods, the representation of smoke mainly depends on the visual information in the current image frame. In the case of light smoke, the original background can be still seen and may deteriorate the characterization of smoke. The core idea of this paper is to demonstrate the superiority of using smoke component for smoke detection. In order to obtain smoke component, a blended image model is constructed, which basically is a linear combination of background and smoke components. Smoke opacity which represents a weighting of the smoke component is also defined. Based on this model, an optimization problem is posed. An algorithm is devised to solve for smoke opacity and smoke component, given an input image and the background. The resulting smoke opacity and smoke component are then used to perform the smoke detection task. The experimental results on both synthesized and real image data verify the effectiveness of the proposed method.
    Multimedia and Expo (ICME), 2012 IEEE International Conference on; 01/2012
    (A simplified sketch of the blended image model appears after this list.)
  • ABSTRACT: Video surveillance systems are often used to detect anomalies: rare events which demand a human response, such as a fire breaking out. Automated detection algorithms enable vastly more video data to be processed than would be possible otherwise. This note presents a video analytics framework for the detection of amorphous and unstructured anomalies such as fire, targets in deep turbulence, or objects behind a smoke-screen. Our approach uses an off-line supervised training phase together with an on-line Bayesian procedure: we form a prior, compute a likelihood function, and then update the posterior estimate. The prior consists of candidate image-regions generated by a weak classifier. Likelihood of a candidate region containing an object of interest at each time step is computed from the photometric observations coupled with an optimal-mass-transport optical-flow field. The posterior is sequentially updated by tracking image regions over time and space using active contours, thus extracting samples from a properly aligned batch of images. The general theory is applied to the video-fire-detection problem with excellent detection performance across substantially varying scenarios which are not used for training.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
    (A toy sketch of the sequential Bayesian update appears after this list.)
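The WACV 2013 abstract above builds its structural features from an accumulated motion mask (AMM) and an accumulated intensity template (AIT). The minimal Python sketch below only illustrates the accumulation idea with plain frame differencing; the AIT, the Fourier contour features, the SVM stage, and all parameter values are omitted or assumed, and the function name is hypothetical.

    # Hedged sketch of an accumulated-motion-mask style count, not the paper's AMM.
    import numpy as np

    def accumulated_motion_mask(frames, thresh=0.05):
        """frames: (T, H, W) float array in [0, 1]; counts, per pixel, how many
        consecutive-frame differences exceed thresh over the temporal window."""
        moving = np.abs(np.diff(frames, axis=0)) > thresh
        return moving.sum(axis=0)

    # Toy usage: a flickering patch accumulates a high count.
    rng = np.random.default_rng(1)
    frames = np.zeros((10, 8, 8))
    frames[:, 2:5, 2:5] = rng.random((10, 3, 3))       # "flame-like" flicker
    print(accumulated_motion_mask(frames))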
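The ICME 2012 abstract above models an observed frame as a per-pixel blend I = alpha*S + (1 - alpha)*B of a smoke component S and the background B, and recovers alpha and S by solving an optimization problem. The toy sketch below inverts only a much simpler special case, assuming a known constant gray smoke level; it is meant to make the model concrete, not to reproduce the paper's algorithm.

    # Closed-form opacity under the toy assumption of a constant gray smoke
    # component S = smoke_level (an assumption the paper does not make).
    import numpy as np

    def smoke_opacity(frame, background, smoke_level=0.8, eps=1e-6):
        """frame and background are float arrays in [0, 1]; returns per-pixel alpha."""
        alpha = (frame - background) / (smoke_level - background + eps)
        return np.clip(alpha, 0.0, 1.0)

    # Toy usage: a darker background partly covered by light smoke.
    bg = np.full((4, 4), 0.3)
    frame = 0.5 * 0.8 + 0.5 * bg          # 50% opacity everywhere under the model
    print(smoke_opacity(frame, bg))       # ~0.5 at every pixel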
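The ICIP 2011 abstract above combines an off-line-trained prior with an on-line Bayesian update driven by photometric and OMT optical-flow observations. The toy loop below shows only the update rule over a handful of candidate regions; the weak classifier, the likelihood model, and the active-contour tracking are all left out, and the numbers are invented.

    # Toy sequential Bayes update over candidate regions, not the paper's full framework.
    import numpy as np

    def bayes_update(prior, likelihood):
        """One sequential Bayes step: posterior proportional to likelihood * prior."""
        post = likelihood * prior
        return post / post.sum()

    belief = np.array([0.5, 0.3, 0.2])                 # prior from a weak classifier, 3 regions
    for lik in ([0.9, 0.2, 0.1], [0.8, 0.3, 0.2]):     # per-frame likelihoods (color + OMT flow)
        belief = bayes_update(belief, np.array(lik))
    print(belief)                                      # the first region's probability dominates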