Article

Foreground segmentation in atmospheric turbulence degraded video sequences to aid in background stabilization


Abstract

Video sequences captured over long ranges through the turbulent atmosphere contain some degree of atmospheric turbulence degradation (ATD). Stabilizing the geometric distortions in ATD video sequences that also contain objects undergoing real motion is challenging, because it is difficult to discriminate which part of the visible motion is real and which part is caused by ATD warping. As a result, most stabilization techniques applied to ATD sequences distort real motion in the sequence. We propose a method to classify foreground regions in ATD video sequences. This classification is used to stabilize the background of the scene while preserving objects undergoing real motion by compositing them back into the sequence. A hand-annotated dataset of three ATD sequences is produced, with which the performance of this approach can be quantitatively measured and compared against the current state of the art.
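The compositing step described in the abstract can be sketched as a simple per-pixel blend: foreground (real-motion) pixels are kept from the original frame, while background pixels are taken from the turbulence-stabilized frame. The function and array names below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def composite_foreground(original, stabilized, fg_mask):
    """Keep real-motion pixels from the original frame; take background
    pixels from the turbulence-stabilized frame."""
    fg = fg_mask.astype(original.dtype)
    return fg * original + (1 - fg) * stabilized

# toy 2x2 frames: one foreground pixel at (0, 0)
orig = np.array([[10.0, 20.0], [30.0, 40.0]])
stab = np.array([[11.0, 21.0], [29.0, 41.0]])
mask = np.array([[1, 0], [0, 0]])
out = composite_foreground(orig, stab, mask)
```

In practice the mask comes from the foreground-classification step and would typically be feathered at its boundary to avoid visible seams.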

... Here, F denotes the given series of frames (F1, ..., Fn) acquired from a stationary camera, and B, O, E [17] denote the pixel matrices for the background, moving objects, and error/turbulence, respectively. This decomposition follows from the inherent properties of the components. ...
... The background scene is presumed to be static, so it consists of linearly correlated elements that form part of a low-rank matrix [18]. The turbulence component [17] can be captured by the Frobenius norm of the matrix obtained by stacking the columns. The moving-object component [19] is best captured by limiting the number of non-zero entries, which is desirable for finding outliers. ...
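The three properties in the snippet above (low-rank background, sparse objects, small-norm turbulence residual) can be illustrated numerically. This is a sketch of the decomposition structure F = B + O + E only, not of the optimization that recovers it; all names and sizes are illustrative.

```python
import numpy as np

# Each column is one vectorized frame; F = B + O + E as in the text.
rng = np.random.default_rng(0)
n_pixels, n_frames = 100, 8

b = rng.random(n_pixels)
B = np.tile(b[:, None], (1, n_frames))        # static background: rank-1 matrix
O = np.zeros((n_pixels, n_frames))
O[5, 3] = 1.0                                 # one sparse moving-object entry
E = 0.01 * rng.standard_normal((n_pixels, n_frames))  # small turbulence/error term
F = B + O + E

rank_B = np.linalg.matrix_rank(B)             # low rank (here: 1)
sparsity_O = np.count_nonzero(O)              # sparse (here: 1 nonzero entry)
frob_E = np.linalg.norm(E, 'fro')             # small Frobenius norm
```

Recovering B, O, E from F alone is the job of a robust-PCA-style solver that minimizes a weighted sum of these three measures.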
... For instance, in [27], the authors propose to use an adaptive thresholding technique to distinguish clusters of turbulence movement vs. object movement. The same type of approach is also used in [28,29]. Experimentally, we observe that this idea does not work well when the velocity magnitudes of both types of movement are of the same order. ...
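The adaptive-thresholding idea criticized above can be sketched as follows: flow magnitudes well above the scene-wide statistics are labeled object motion, the rest turbulence jitter. The function and the threshold rule (mean + k·std) are assumptions for illustration, not the exact rule of [27]; the sketch also makes the failure mode visible, since the test only works when object velocities clearly exceed turbulence velocities.

```python
import numpy as np

def threshold_motion(flow_mag, k=1.5):
    """Label pixels whose flow magnitude exceeds mean + k*std as object
    motion; the rest are treated as turbulence jitter. This fails when
    both motion types have similar magnitudes, as the text notes."""
    thresh = flow_mag.mean() + k * flow_mag.std()
    return flow_mag > thresh

mag = np.array([0.1, 0.2, 0.1, 0.15, 5.0])  # one fast-moving object pixel
labels = threshold_motion(mag)              # only the last pixel is flagged
```

If the object moved at turbulence-like speed (e.g. 0.2 px/frame), no threshold of this form could separate it from the jitter.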
Preprint
Full-text available
In this paper, we investigate how moving objects can be detected when images are impacted by atmospheric turbulence. We present a geometric spatio-temporal point of view to the problem and show that it is possible to distinguish movement due to the turbulence vs. moving objects. To perform this task, we propose an extension of 2D cartoon+texture decomposition algorithms to 3D vector fields. Our algorithm is based on curvelet spaces which permit to better characterize the movement flow geometry. We present experiments on real data which illustrate the efficiency of the proposed method.
... Some techniques do not fall cleanly into either category. For instance, [29][30][31] all combine both motion-field and pixel-intensity methods. The use of multiple cameras and a nonconventional camera are explored in [32,33], respectively. ...
Article
Full-text available
In long-range imaging applications, anisoplanatic atmospheric optical turbulence imparts spatially- and temporally varying blur and geometric distortions in acquired imagery. The ability to distinguish true scene motion from turbulence warping is important for many image-processing and analysis tasks. The authors present a scene-motion detection algorithm specifically designed to operate in the presence of anisoplanatic optical turbulence. The method models intensity fluctuations in each pixel with a Gaussian mixture model (GMM). The GMM uses knowledge of the turbulence tilt-variance statistics. We provide both quantitative and qualitative performance analyses and compare the proposed method to several state-of-the art algorithms. The image data are generated with an anisoplanatic numerical wave-propagation simulator that allows us to have motion truth. The subject technique outperforms the benchmark methods in our study.
... 9,10 In the past decades, several different long-distance imaging systems have been developed, working in both visible and infrared spectral bands. 11 Atmospheric turbulence is a classical problem in astronomical observations, where it becomes non-negligible, but quite recently it has also attracted video surveillance 12 and security researchers. Additionally, it is broadly relevant to computer vision, 13 where target objects are identified algorithmically without human intervention, and to remote sensing, 14 for detecting and identifying planets and satellites. ...
Article
Long-range observing systems can be strongly affected by scintillations caused by changes in atmospheric conditions. In recent years, various turbulence mitigation approaches have shown promise. In this paper, we propose an effective method to alleviate the effects of atmospheric distortion on observed images and video sequences, which are mainly affected by floating air turbulence that can severely degrade image quality. Existing algorithms primarily focus on the removal of turbulence and provide solutions only for static scenes containing no moving entities (real motion). In the traditional Sobolev Gradient and Laplacian (SGL) algorithm, the updated frame is used iteratively to correct the turbulence; this reduces the turbulence effect but imposes artifacts on the real motion that blur moving objects. The proposed method modifies the existing SGL algorithm to eliminate turbulence while removing the ghost artifact that the existing approach forms on moving objects, so that turbulence is alleviated without harming the moving objects in the scene. The method is demonstrated on significantly distorted sequences provided by OTIS and compared with the SGL technique; with the turbulence removed, the information conveyed in the scene becomes clearly visible. The proposed approach is evaluated using standard performance measures (MSE, PSNR, and SSIM), and the results show that it outperforms existing state-of-the-art approaches on all three measures.
... It represents the process of finding abrupt and significant changes in gray-level image intensity, and is used in a variety of computer vision applications (e.g. segmentation [1][2][3][4][5][6], depth-map compression [7], medical imaging [8]). Owing to its importance, a large body of research has addressed this subject [9][10][11][12][13][14], and several comparative studies have been carried out [15][16][17]. ...
Article
Full-text available
The detection of object edges in images is a crucial step employed in a vast number of computer vision applications, for which a series of different algorithms have been developed in recent decades. This paper proposes a new edge detection method based on quantum information, which proceeds in two main steps: (i) an image enhancement stage that employs the quantum superposition law and (ii) an edge detection stage based on the probability of photon arrival at the camera sensor. The proposed method has been tested on synthetic and real images devoted to agriculture applications, where the Fram & Deutsch criterion has been adopted to evaluate its performance. The results show that the proposed method gives better results in terms of detection quality and computation time compared to classical edge detection algorithms such as Sobel, Kayyali, Canny, and a more recent algorithm based on Shannon entropy.
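One of the classical baselines named above, the Sobel operator, can be sketched directly: the gradient magnitude is computed from two fixed 3x3 convolutions. This is a plain illustration of the baseline, not of the paper's quantum method; the function name and test image are assumptions.

```python
import numpy as np

def sobel_magnitude(img):
    """Classical Sobel gradient magnitude via explicit 3x3 convolutions;
    border pixels are left at zero."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[y, x] = np.hypot(gx, gy)
    return out

# vertical step edge: left columns 0, right columns 1
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)   # responds only along the step between columns 2 and 3
```

Comparative studies such as the one above typically threshold this magnitude map and then score the detected edge pixels against ground truth.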