Fig 1 - uploaded by Xin Ning
Atmospheric scattering model.  

Source publication
Article
Full-text available
In this letter, a novel and highly efficient algorithm is proposed for haze removal from only a single input image. The proposed algorithm is built on the atmospheric scattering model. Firstly, the global atmospheric light is estimated and a coarse atmospheric veil is inferred based on the statistics of the dark channel prior. Secondly, the coarser...
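The dark-channel statistic this abstract relies on can be sketched in a few lines. The following is a minimal illustrative implementation of the standard construction (minimum over RGB, then a minimum filter over a local patch), not the authors' actual code; the patch size is an arbitrary choice here.

```python
def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel min over RGB, then min over a
    local square patch. img is an H x W x 3 nested list with values
    in [0, 1]; returns an H x W nested list."""
    h, w = len(img), len(img[0])
    # Per-pixel minimum over the three color channels.
    min_rgb = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Minimum over the (clipped) patch centered at (y, x).
            dark[y][x] = min(min_rgb[yy][xx]
                             for yy in range(max(0, y - r), min(h, y + r + 1))
                             for xx in range(max(0, x - r), min(w, x + r + 1)))
    return dark
```

In dark-channel-based dehazing, a coarse transmission (the "atmospheric veil" above) is then typically estimated from this map, e.g. as 1 minus a weighted dark channel of the image normalized by the atmospheric light.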

Context in source publication

Context 1
... shown in Fig. 1, the atmospheric scattering model, which is commonly used to describe the influence of bad weather conditions on an image, is an effective formulation in the field of computer vision [10]. The atmospheric scattering model can be described as: ...
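The equation is truncated in this excerpt; in the standard formulation used throughout the dehazing literature it is I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)), where I is the observed intensity, J the scene radiance, A the global atmospheric light, β the scattering coefficient, and d the scene depth. A minimal per-pixel sketch of the forward model and its inversion (the t_min clamp is a common stabilization, not taken from this letter):

```python
import math

def transmission(beta, depth):
    """Transmission map t(x) = exp(-beta * d(x))."""
    return math.exp(-beta * depth)

def hazy_pixel(J, t, A):
    """Forward scattering model: I(x) = J(x) * t(x) + A * (1 - t(x))."""
    return J * t + A * (1 - t)

def dehazed_pixel(I, t, A, t_min=0.1):
    """Invert the model: J(x) = (I(x) - A) / max(t, t_min) + A.
    The clamp avoids amplifying noise where t is near zero."""
    return (I - A) / max(t, t_min) + A
```

A round trip through the forward model and its inversion recovers the scene radiance whenever t stays above the clamp.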

Similar publications

Chapter
Full-text available
This chapter describes a novel method to enhance degraded nighttime images by dehazing and color correction. In the first part of this chapter, the authors focus on the filtering process for low-illumination images. Secondly, they propose an efficient dehazing model for removing haziness. Thirdly, a color correction method is proposed for color cons...

Citations

... This task has attracted considerable attention over the years, and restorers tend to use the most appropriate way to restore images to their original state while ensuring the most desirable artistic effect [8,9]. High-quality image inpainting has a wide range of application areas, such as target removal, scratch removal, watermark removal, and inpainting of old photos [10][11][12][13]. ...
Article
Full-text available
(1) Background: In the future Internet era, clarity and structural rationality are important factors in image inpainting. Currently, image inpainting techniques based on generative adversarial networks have made great progress; however, in practical applications, there are still problems of unreasonable or blurred inpainting results for high-resolution images and images with complex structures. (2) Methods: In this work, we designed a lightweight multi-level feature aggregation network that extracts features from convolutions with different dilation rates, enabling the network to obtain more feature information and recover more reasonable missing image content. Fast Fourier convolution was designed and used in the generative network, enabling the generator to consider the global context at a shallow level, making it easier to perform high-resolution image inpainting tasks. (3) Results: The experiment shows that the method designed in this paper performs well in geometrically complex and high-resolution image inpainting tasks, providing a more reasonable and clearer inpainting image. Compared with the most advanced image inpainting methods, our method outperforms them in both subjective and objective evaluations. (4) Conclusions: The experimental results indicate that the method proposed in this paper has better clarity and more reasonable structural features.
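The dilated convolutions mentioned in (2) enlarge the receptive field without adding parameters by spacing the kernel taps apart. A one-dimensional sketch of the core operation (illustrative only, not the authors' network):

```python
def dilated_conv1d(x, w, dilation=2):
    """1-D dilated convolution, no padding: each kernel tap skips
    `dilation - 1` input samples, so a kernel of length k covers
    (k - 1) * dilation + 1 inputs while keeping only k weights."""
    span = (len(w) - 1) * dilation
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span)]
```

Stacking such layers with different dilation rates, as the abstract describes, lets the network aggregate features from several receptive-field sizes at the same depth.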
... Deep learning is widely used in natural language processing [46], image classification [45], image inpainting, and other fields. Current research on image inpainting mainly focuses on regular mask [9] and irregular mask [10] inpainting, denoising [11], defogging [12], old photo coloring [13], target removal [14], and other tasks. In recent years, many deep learning-based image inpainting algorithms have been proposed. ...
Article
Full-text available
Ancient Chinese books are of great significance to historical research and cultural inheritance. Unfortunately, many of these books have been damaged and corroded in the process of long-term transmission. The restoration by digital preservation of ancient books is a new method of conservation. Traditional character restoration methods ensure the visual consistency of character images through character features and the pixels around the damaged area. However, reconstructing characters often causes errors, especially when there is large damage in critical locations. Inspired by human’s imitation writing behavior, a two-branch structure character restoration network EA-GAN (Example Attention Generative Adversarial Network) is proposed, which is based on a generative adversarial network and fuses reference examples. By referring to the features of the example character, the damaged character can be restored accurately even when the damaged area is large. EA-GAN first uses two branches to extract the features of the damaged and example characters. Then, the damaged character is restored according to neighborhood information and features of the example character in different scales during the up-sampling stage. To solve problems when the example and damaged character features are not aligned and the convolution receptive field is too small, an Example Attention block is proposed to assist in restoration. Qualitative and quantitative analysis experiments are carried out on a self-built dataset MSACCSD and real scene pictures. Compared with current inpainting networks, EA-GAN can get the correct text structure through the guidance of the additional example in the Example Attention block. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) value increased by 9.82% and 1.82% respectively. The learned perceptual image patch similarity (LPIPS) value calculated by Visual Geometry Group (VGG) network and AlexNet decreased by 35.04% and 16.36% respectively. 
Our method obtained better results than current inpainting methods. It also has a good restoration effect on characters not seen during training, which is helpful for the digital preservation of ancient Chinese books.
... Image inpainting is the process of completing or recovering the missing region in an image, or removing objects added to it [1]-[7]. With the increasing interest in image inpainting, many approaches have emerged, which can be broadly classified into three categories: diffusion-based methods, patch-based methods, and deep learning-based methods. ...
Preprint
Full-text available
In recent years, some researchers have focused on using a single image to obtain a large number of samples through multi-scale features. This study proposes a brand-new idea that requires only ten or even fewer samples to construct the low-rank structural-Hankel-matrices-assisted score-based generative model (SHGM) for the color image inpainting task. During the prior learning process, a certain number of internal middle patches are first extracted from several images, and then the structural Hankel matrices are constructed from these patches. To better apply the score-based generative model to learn the internal statistical distribution within patches, the large-scale Hankel matrices are finally folded into higher-dimensional tensors for prior learning. During the iterative inpainting process, SHGM views the inpainting problem as a conditional generation procedure in a low-rank environment. As a result, the intermediate restored image is acquired by alternately performing the stochastic differential equation solver, the alternating direction method of multipliers, and data consistency steps. Experimental results demonstrate the remarkable performance and diversity of SHGM.
... Image repair methods based on deep learning: at present, research on image restoration mainly focuses on regular mask [9] and irregular mask [10] repair, denoising [11], defogging [12], old photo coloring [13], target removal [14], and other tasks. In recent years, many image restoration algorithms based on deep learning have been proposed. ...
Preprint
Full-text available
Ancient books are of great significance to historical research and cultural inheritance. Unfortunately, these books have been damaged and corroded in the process of long-term transmission. The restoration and digital preservation of ancient books is a new protection method. Traditional character restoration methods ensure the visual consistency of character images through the character features and the pixels around the damaged hole. However, the character structure often causes errors, especially when there is a large hole in a critical location. Inspired by human copying behavior, a two-branch character restoration network, EA-GAN, is proposed, which is based on a generative adversarial network and fuses reference examples. By referring to the features of the example character, the damaged character can be repaired accurately even when the damaged area is large. EA-GAN first uses two branches to extract the features of the damaged character and the example character respectively. Then, the damaged character is repaired according to its neighborhood information and example character features at different scales in the upsampling stage. To solve the problems that the example features and the damaged character features are not aligned and that the convolution receptive field is too small, an Example Attention block is proposed to assist repair. Qualitative and quantitative analysis experiments are carried out on the self-built dataset MSACCSD and real scene pictures. Compared with the latest repair networks, EA-GAN can get the correct text structure through the guidance of the additional example in the Example Attention block. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values were improved by 9.82% and 1.82%, and the learned perceptual image patch similarity (LPIPS) value calculated by VGG and AlexNet decreased by 35.04% and 16.36% respectively, which obtained better results than the current repair methods.
It also has a good repair effect on characters not seen during training, which is helpful for the digital preservation of ancient books.
... As for the lightweight network with MobileNet as the backbone, it is extremely fast, has very few parameters, and can be deployed on mobile devices, but its accuracy is clearly lacking, which brings a lot of unnecessary trouble to the identification work. Among improved CNN models, the Dual-Task convolutional neural network we designed includes two model structures, for image cleaning and tree recognition, and compared with an ordinary single-task CNN it can extract more representative features based on the correlation between the two tasks [23,24]. It not only improves the accuracy of tree identification and saves labor costs, but also reduces the burden on trainers and promotes the long-term development of tree classification. ...
Article
Full-text available
To address the difficult problem of extracting tree images from complex backgrounds, we took tree species as the research object and proposed a fast tree-image recognition system based on the Caffe platform and a Dual-Task Gabor Convolutional Neural Network. In the research on deep learning algorithms based on the Caffe framework, the improved Dual-Task CNN model (DCNN) is applied to train the image extractor and classifier to accomplish the dual tasks of image cleaning and tree classification. In addition, when compared with traditional classification methods represented by the Support Vector Machine (SVM) and a Single-Task CNN model, the Dual-Task CNN model demonstrates its superiority in classification performance. Then, to further improve the recognition accuracy for similar species, a Gabor kernel was introduced to extract frequency-domain features from images at different scales and orientations, so as to enhance the texture features of leaf images and improve the recognition effect. The improved model was tested on datasets of similar species. As demonstrated by the results, the improved deep Gabor Dual-Task convolutional neural network (GCNN) is advantageous for tree recognition and similar-tree classification when compared with the ordinary Dual-Task CNN classification method. Finally, the recognition results can also be displayed in the application's graphical interface. The Dual-Task Gabor CNN can be applied to mobile programs based on Ubuntu, Android, iOS, and other systems. The deep learning model used to identify tree species can be deployed on the server side, and mobile devices can read and search for tree species images through network connections.
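The Gabor kernel used for the frequency-domain features above is a standard construction: a Gaussian envelope modulated by a cosine carrier tuned to an orientation and a wavelength. A small illustrative generator (the parameter defaults are arbitrary, not taken from the paper):

```python
import math

def gabor_kernel(size=5, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel of shape size x size.
    theta: orientation of the carrier; lam: wavelength; gamma: spatial
    aspect ratio of the Gaussian envelope; psi: phase offset."""
    r = size // 2
    kernel = []
    for y in range(-r, r + 1):
        row = []
        for x in range(-r, r + 1):
            # Rotate coordinates into the filter's orientation.
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                           / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xp / lam + psi))
        kernel.append(row)
    return kernel
```

A bank of such kernels at several scales (sigma, lam) and orientations (theta), convolved with the leaf images, yields the multi-scale texture responses the abstract describes.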
... SSD outperforms the other two detectors with an accuracy of over 74%, and we present a comparative analysis of the models. For future study, we intend to develop models for classifying the leaves/ears in real-life scenarios with natural backgrounds and without any pre-processing, possibly by employing automated haze removal techniques [16]. This could also be done by integrating the object detection model and classification into a unified system. ...
Article
Full-text available
Automatic identification of plant diseases is critical for agricultural crop protection so as to enhance the crop yield. Recent advances in deep learning and image processing give hope for the development of efficient algorithms to address this issue. In this manuscript, we make use of these schemes to develop a solution (which we name LDLCNN) for automatic disease identification to enable crop protection. We propose two lightweight Convolutional Neural Network (CNN) models for identifying diseases in the leaves and ears of pearl millets. Although many models exist in the literature, the total number of parameters employed by our model is smaller by a factor of a thousand compared to MobileNet v2, one of the most popular tools used for such applications. Hence our scheme can be employed and run directly on devices with much less compute power. It is noteworthy that, while using so few parameters, the proposed model achieves an accuracy of 97.9% in detecting the existence of the downy mildew disease in pearl millets. To eliminate most of the pre-processing steps and make our system suitable for real-time applications, we explore various object detectors, such as YOLOv3, SSD, and RetinaNet, to detect multiple instances of healthy and diseased leaves and ears in an image. Amongst these object detectors, we note that SSD outperforms the other two models with an accuracy of 74%, and we present a comparative analysis of the models.
... There are two mainstream dehazing categories. One is based on traditional enhancement algorithms, such as histogram equalization, the Retinex algorithm [28,42], contrast enhancement [2,32], and the wavelet transform [14,16]. The other comprises image restoration methods based on the atmospheric scattering model, in which estimating the ambient illumination map and the transmission map are the key steps. ...
Article
Full-text available
Nighttime haze images always suffer from non-uniform illumination from artificial light sources, and most current dehazing algorithms are more suitable for daytime haze removal than nighttime. In this paper, we propose a novel method for nighttime image dehazing via gray space. Firstly, we mapped the haze image from RGB color space to gray space and adopted a convolutional neural network to obtain the feature distribution map of the haze. We then fused the haze feature distribution map with the original image to obtain the initial haze-free image. Finally, the value and chroma of the initial haze-free image were enhanced in HSV space by an improved gamma function. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art algorithms for nighttime image haze removal, especially in terms of color consistency and artifact reduction.
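The final enhancement step can be illustrated with the standard-library color-space conversion. The paper's "improved gamma function" is not specified in this abstract, so a plain power-law gamma stands in for it here as an assumption:

```python
import colorsys

def enhance_value(rgb, gamma=0.8):
    """Brighten a pixel by applying gamma correction to the V channel
    in HSV space; components in [0, 1], gamma < 1 brightens shadows.
    The power law is an illustrative stand-in for the paper's
    improved gamma function."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, v ** gamma)
```

Applying this per pixel to the initial haze-free image corresponds to the value-enhancement part of the step described above; the chroma enhancement would operate on S analogously.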
... At present, there are relatively few related algorithms for sand-dust images, and most of them concentrate on the research of image dehazing algorithms, such as literature [6,13,16,18,20,29,33]. Dehazing algorithms can be divided into three categories. ...
Article
Full-text available
Due to the scattering and absorption of light, images captured in sand-dust weather suffer from serious color shift and poor visibility, which can affect computer vision applications. To solve these problems, the present study proposes a color balance and sand-dust image enhancement algorithm in Lab space. To correct the color of the sand-dust image, a color balance technique is put forward: it first employs the green channel to compensate the lost value of the blue channel, and then a statistics-based strategy is employed to remove the color shift. The proposed color balance technique can effectively remove the color shift while reducing the blue artifact. The brightness component L is decomposed by guided filtering to obtain the detail component. Meanwhile, to enhance the detail information of the image, a nonlinear mapping function and a gamma function are applied to the detail component. Experimental results based on qualitative and quantitative evaluation demonstrate that the proposed method can effectively remove color shift, enhance details and contrast, and produce results superior to those of other state-of-the-art methods. Additionally, the proposed algorithm can satisfy real-time applications and can also be used to restore turbid underwater images and haze images.
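The green-to-blue compensation step can be sketched as follows. The paper's exact formula is not given in this abstract, so the Ancuti-style compensation rule common in underwater color correction is used as an illustrative stand-in, with alpha as a hypothetical strength parameter:

```python
def compensate_blue(img, alpha=1.0):
    """Compensate the attenuated blue channel from the green channel:
    B' = B + alpha * (mean(G) - mean(B)) * (1 - B) * G per pixel,
    clamped to [0, 1]. An illustrative variant, not the paper's exact
    formula. img: H x W x 3 nested lists with values in [0, 1]."""
    pixels = [p for row in img for p in row]
    g_mean = sum(p[1] for p in pixels) / len(pixels)
    b_mean = sum(p[2] for p in pixels) / len(pixels)
    return [[[r, g, min(1.0, b + alpha * (g_mean - b_mean) * (1 - b) * g)]
             for r, g, b in row] for row in img]
```

The (1 − B) factor limits the boost in pixels whose blue value is already high, which is what keeps the compensation from overshooting toward a blue cast.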
... Besides, it also identifies the problems of current research and determines the research idea and framework. The second section is a literature review, which summarizes and analyzes current research on building recognition and extraction from high-resolution satellite images and highlights the advantages of deep neural networks [13][14][15]. The third section introduces the research methodology, including the convolutional neural network (CNN) applications, model structure, data source, and experimental performance verification. ...
Article
Full-text available
Extracting and recognizing buildings from high-resolution remote sensing images faces many problems due to the complexity of buildings on the surface. The purpose is to improve the recognition and extraction capabilities for remote sensing satellite images. The Gao Fen-2 (GF-2) high-resolution remote sensing satellite is taken as the research object. A deep convolutional neural network (CNN) serves as the core of image feature extraction, and PCA (principal component analysis) is adopted to reduce the dimensionality of the data. A correction neural network model, the boundary regulated network (BR-Net), is proposed. The features of remote sensing images are extracted through convolution, pooling, and classification. Different data collection models are utilized for comparative analysis to verify the performance of the proposed model. Results demonstrate that when using a CNN to recognize remote sensing images, the recognition accuracy is much higher than that of traditional image recognition models and can reach 95.3%. Compared with recently proposed models, the performance is improved by 15% and the recognition speed is increased by 20%. When extracting buildings with higher accuracy, the proposed model can also ensure clear boundaries, thereby obtaining a complete building image. Therefore, using deep learning technology to identify and extract buildings from high-resolution satellite remote sensing images is of great significance for advancing deep learning applications in image recognition.
... Therefore, we present a discriminative DCNN model to solve this recognition problem, which uses a 9-layer convolutional structure to extract features from chest X-rays and adopts center loss to improve the discriminative power of the model [6]. More specifically, this model learns a center for each class of deep features [11]. Meanwhile, the softmax loss keeps the deep features of different categories separated. ...
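The center loss referred to here, in Wen et al.'s formulation, penalizes the squared distance between each deep feature and its class center. A minimal sketch of the loss value itself (center updates and the softmax term are omitted):

```python
def center_loss(features, labels, centers):
    """Center loss: 0.5 * sum_i ||x_i - c_{y_i}||^2, which pulls each
    deep feature x_i toward the learned center of its class y_i.
    features: list of feature vectors; labels: class index per feature;
    centers: mapping from class index to center vector."""
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return total
```

In training, this term is added to the softmax loss with a weighting factor: the softmax loss keeps classes apart while the center loss compacts each class, which is the discriminativeness the passage describes.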