Hazardous Gas Emission Monitoring Based on
and Xiaopeng Shao
Xi’an Institute of Applied Optics, Xi’an 710065, China
School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071, China
Chinese Flight Test Establishment, Xi’an 710089, China
Correspondence should be addressed to Xiaopeng Shao; email@example.com
Received 28 September 2017; Accepted 21 November 2017; Published 4 March 2018
Academic Editor: Yufei Ma
Copyright © 2018 Wenjian Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Air pollution presents unprecedentedly severe challenges to humans today. Various measures have been taken to monitor pollution from gas emissions and the changing atmosphere, among which imaging is of crucial importance: from images of target scenes, intuitive judgments and in-depth data become achievable. However, due to the limitations of imaging devices, effective and efficient monitoring work is often hindered by low-resolution target images. To deal with this problem, a superresolution reconstruction method based on the idea of sparse representation was proposed in this study for obtaining high-resolution monitoring images. In particular, multiple dictionary pairs were trained according to the gradient features of samples, and one optimal pair of dictionaries was chosen for reconstruction by judging the weighting of the information in different directions. Furthermore, the K-means singular value decomposition algorithm was used to train the dictionaries, and the orthogonal matching pursuit algorithm was employed to calculate the sparse coding coefficients. Finally, the experimental results demonstrated the advantages of the method in both visual fidelity and numerical measures.
1. Introduction
Today, humans are facing the most severe situation of air pollution due to industrial emission, fuel combustion, motor vehicle exhaust emission, and so on [1–4]. It adversely affects human health and daily life. Various monitoring measures have been taken to provide real-time information about the environment. For example, the monitoring process focuses on the number of chimneys that are working and the size of their emissions. As an important method, imaging technology has made the monitoring task easy and efficient, although it often suffers from low-resolution problems originating from the imaging devices. Details are not available in the captured images, which further limits the judgment of the conditions [6–8].
To improve the resolution of monitoring images, charge-coupled devices (CCDs) with high resolution can be exploited, which will certainly lead to a higher cost. To improve the effect of monitoring images without replacing the original detection equipment, people gradually turn to image-processing methods, of which superresolution reconstruction is an effective way to improve the quality of images, including hyperspectral, visible light, and infrared images. Many significant superresolution (SR) methods have been proposed to deal with SR reconstruction problems in past decades [9–12]. Among them, the conventional interpolation-restoration approach is computationally simple but limited in the detail it can recover. One improved SR algorithm is the generalized nonlocal means algorithm, which generalized the successful nonlocal means (NLM) video denoising technique [13, 14].
Journal of Spectroscopy
Volume 2018, Article ID 2698025, 7 pages

In addition, in the study of improving image resolution, single-frame image SR reconstruction algorithms based on dictionary learning show a unique advantage in combining the prior information of images in this research area [15, 16], so these methods reconstruct images with higher resolution
than other methods. Inspired by compressed sensing (CS) theory, Yang et al. first proposed an SR reconstruction method based on sparse coding [17, 18], named sparse coding-based superresolution (SCSR), which can reconstruct a HR image by global dictionary learning. Zeyde et al. improved Yang et al.'s method and proposed the single image scale-up method using sparse representation (SISR). The SCSR algorithm proposed by Yang et al. and the SISR method proposed by Zeyde et al. are both effective learning-based SR reconstruction methods, which try to obtain a HR image by training a high-resolution dictionary and a corresponding low-resolution (LR) dictionary.
When reconstructing LR images of different kinds, the methods mentioned above choose only one couple of dictionaries. But the information in different gas observation images, or in different parts of one image, may vary a lot. If we separate a whole LR image into many patches and then, when reconstructing each HR patch, choose the dictionary couple with the highest similarity according to the features in the LR patch, the resolution of the reconstructed monitoring image could be improved by a large margin.
Hence, a novel SR reconstruction method based on adaptive dictionary learning of gradient features is proposed in this paper according to the analyses above. In particular, the features of the training samples are clustered into several groups by the K-means method, and multiple dictionary pairs are trained offline by the K-means singular value decomposition (K-SVD) algorithm. Then, the dictionary pair with the highest similarity to the LR image is selected in the process of online image reconstruction. Finally, several groups of experiments are completed, and the results show that the images reconstructed by the proposed method are excellent in both subjective visual perception and objective evaluation metrics.
2. Principle of SR Image Reconstruction
Dictionary learning is particularly important in the process of image SR reconstruction. By means of sparse representation, more of the information in the dictionaries can be carried by fewer atoms, which leads to higher quality of the reconstructed images and a lower time cost. In this section, the calculations of image sparse representation and dictionary pair learning are described in detail.
In the process of imaging, if the original ideal HR image is x̂, which is affected by the blurring of the optical imaging system and the downsampling of the CCD when displayed on the device, the traditional imaging process can be modeled as

x = SHx̂ + v, (1)

where x is the observed image, S represents the downsampling operator, H is a blurring filter, and v is the additive noise caused by the system.
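The degradation model in (1) can be simulated in a few lines; the box blur kernel, scale factor, and noise level below are illustrative assumptions, since the actual optics and sensor are not specified in the text.

```python
import numpy as np

def degrade(hr, scale=3, blur_size=3, noise_sigma=0.0, seed=0):
    """Simulate x = SH x_hat + v: blur the HR image, downsample, add noise."""
    # H: a simple box blur stands in for the unspecified optical blur
    kernel = np.ones((blur_size, blur_size)) / blur_size**2
    pad = blur_size // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr, dtype=float)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + blur_size, j:j + blur_size] * kernel)
    # S: downsample by the scale factor
    lr = blurred[::scale, ::scale]
    # v: additive sensor noise
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

hr = np.arange(36.0).reshape(6, 6)
lr = degrade(hr, scale=3)
print(lr.shape)  # (2, 2)
```

Such a forward model is also how the LR training and test images are typically generated from HR originals in experiments like those in Section 3.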
Based on the theory of CS, to obtain an ideal HR image using a fast SR reconstruction algorithm, both SISR and SCSR require that the ideal HR image x̂ be sparsely represented by the HR dictionary D_h, which can be written as

x̂ = D_h α. (2)

The observed LR image can be expressed by

x = SHD_h α. (3)

As (2) and (3) show, α ∈ R^n is a sparse vector with n elements. The sparse representation of the ideal HR image is shown in Figure 1. But this equation is not satisfied by many images. Natural image statistics prove that the areas of image primitives usually have very low internal dimensions, and thus they can be represented by a small number of training samples. The primitives mentioned here refer to high-frequency feature areas such as the edges and inflection points in images.
In this study, an assumption was made that the image primitives can be represented in sparse form by a dictionary trained with a large number of primitive patches. A HR image can be obtained by the following steps, and Figure 2 shows the process of SR reconstruction.
(i) The primitive areas of the training samples are extracted and divided into four subsets according to the direction of their gradient features. The four directions are 0°, 45°, 90°, and 135°, which indicate the angles with respect to the vertical direction.
(ii) The four subsets of primitive areas are used as the data source to train four subdictionary sets, each including a HR dictionary and a corresponding LR dictionary, by the K-SVD algorithm.
(iii) According to the weightings of the gradient feature information in each direction within a LR patch, a certain HR subdictionary and the corresponding LR subdictionary are chosen to reconstruct a HR subpatch.
(iv) The HR subpatches reconstructed by the different dictionaries are combined into the final HR image according to the weights of the features in the four directions.
Typically, the primitive areas in the images can be obtained by (linear) feature extraction operators. For the training sample set x, the HR primitive areas in one direction are denoted x_h^k, and the corresponding LR primitive areas are

x_l^k = SHx_h^k, (4)

where k = 1, 2, 3, 4 and S is the downsampling operator. The features in the horizontal gradient direction (0°) can be obtained after filtering by the operator F1, and the features in the vertical direction (90°) after filtering by F2. The features in the 45° and 135° directions can be obtained by F3 and F4, respectively. The four filters are described in (5).
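Since the explicit forms of F1–F4 are not reproduced here, the sketch below uses simple difference kernels as stand-ins for them, to show how a patch can be assigned to one of the four direction groups by comparing filtered energies:

```python
import numpy as np

# Illustrative directional difference filters; the paper's exact F1..F4
# are not given here, so these kernels are assumptions standing in for them.
FILTERS = {
    0:   np.array([[-1, 0, 1]]),                         # horizontal differences
    90:  np.array([[-1], [0], [1]]),                     # vertical differences
    45:  np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]]),   # one diagonal
    135: np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]]),   # the other diagonal
}

def direction_energy(patch, kernel):
    """Sum of squared responses of `patch` to a small difference kernel."""
    kh, kw = kernel.shape
    h, w = patch.shape
    total = 0.0
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            total += float(np.sum(patch[i:i + kh, j:j + kw] * kernel)) ** 2
    return total

def assign_group(patch):
    """Steps (i)/(iii): pick the direction whose features dominate the patch."""
    return max(FILTERS, key=lambda d: direction_energy(patch, FILTERS[d]))

# A patch varying only along columns has purely horizontal gradients
patch = np.tile(np.arange(5.0), (5, 1))
print(assign_group(patch))  # 0
```

The same energy comparison also yields the per-direction weightings used later when HR subpatches are combined.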
The HR and LR primitive areas obtained in (3) and (4) can be sparsely represented by the corresponding HR and LR dictionaries, respectively. The process of obtaining the dictionaries can be mathematically expressed by

D_h^k = argmin_{D,α} ‖X_h^k − Dα‖_2^2, s.t. ∀i, ‖α_i‖_0 < T_0,
D_l^k = argmin_{D,α} ‖X_l^k − Dα‖_2^2, s.t. ∀i, ‖α_i‖_0 < T_0, (6)

where D_h^k refers to the kth HR subdictionary and D_l^k refers to the corresponding LR dictionary. Because the HR patches and the LR patches have the same sparse representation coefficients, (6) is expressed as

{D_h^k, D_l^k} = argmin (1/N)‖X_h^k − D_h^k α‖_2^2 + (1/M)‖X_l^k − D_l^k α‖_2^2, s.t. ∀i, ‖α_i‖_0 < T_0. (7)

The N and M in the equation above mean the size of the input image. Given an initial dictionary D_0, the dictionaries can be solved by the K-SVD algorithm. Figure 3 shows some parts of the four HR feature patches obtained with the same training samples.
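A minimal sketch of the K-SVD loop might look as follows; for brevity, a 1-sparse coding step (T_0 = 1) stands in for the full OMP stage, and all sizes are toy values rather than the paper's settings.

```python
import numpy as np

def code_1sparse(D, X):
    """Sparse coding with T0 = 1: each signal uses its single best-matching atom.
    (A simplified stand-in for the OMP stage of the full algorithm.)"""
    idx = np.argmax(np.abs(D.T @ X), axis=0)           # best atom per signal
    A = np.zeros((D.shape[1], X.shape[1]))
    A[idx, np.arange(X.shape[1])] = np.sum(D[:, idx] * X, axis=0)
    return A

def ksvd(X, n_atoms, n_iter=10, seed=0):
    """Minimal K-SVD: alternate sparse coding with SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
    for _ in range(n_iter):
        A = code_1sparse(D, X)
        for k in range(n_atoms):
            users = np.nonzero(A[k])[0]                # signals using atom k
            if users.size == 0:
                continue
            # Residual with atom k's own contribution removed, on those signals
            E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k], A[k, users] = U[:, 0], s[0] * Vt[0]  # rank-1 refit
    return D, A

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 50))
D, A = ksvd(X, n_atoms=12, n_iter=5)
print(np.linalg.norm(X - D @ A) < np.linalg.norm(X))  # True
```

With T_0 = 1 this reduces to a gain-shape vector quantizer; the dictionaries in the paper use a larger sparsity target together with the OMP coder described below.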
For a LR image y to be reconstructed, a simple segmentation method was first employed to divide it into p patches y_i ∈ R^n, i = 1, 2, …, p. For a certain LR patch y_i, its corresponding HR patch is ŷ_i (ŷ_i ∈ R^n, i = 1, 2, …, p). According to (1), the LR patches can be written as y_i = SHŷ_i + v, where v refers to the noise. Our purpose is to obtain ŷ_i from y_i and a certain couple of subdictionaries D_h^k and D_l^k, and the process can be expressed as

β_i^k = argmin_β ‖y_i − D_l^k β‖_2^2, s.t. ‖β_i^k‖_0 < T_0. (9)

The symbol β_i^k in the equation above means the sparse representation coefficient of y_i under a certain subdictionary D_l^k. Equation (9) can be solved by the orthogonal matching pursuit (OMP) method, and after calculating β_i^k, in the process of SR reconstructing the HR patch ŷ_i, we need to combine the weights of the information in the four different directions. The HR patch ŷ_i can be written as

ŷ_i = Σ_{j=1}^{4} q_j D_h^j β_i^j, (10)
where q_j represents the weighting coefficient of the information in the four different directions, j = 1, 2, 3, 4. To guarantee the compatibility between the LR patches and the chosen dictionary, the weighting coefficient is expressed by the amount of the information in the different directions. With the coefficients calculated from (9), the final HR patch ŷ_i is obtained.
Figure 1: Example of sparse representation.
The whole HR image ŷ is finally spliced together from all the HR patches ŷ_i.
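Equation (9) is an ℓ0-constrained least-squares fit, which OMP solves greedily; a compact version (dictionary D, patch y, and sparsity target t0 are generic inputs here) is:

```python
import numpy as np

def omp(D, y, t0):
    """Greedy OMP: pick the atom most correlated with the residual, refit, repeat."""
    support, coef = [], np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(t0):
        k = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
        if k not in support:
            support.append(k)
        # Least-squares refit of y on all selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

# Exact recovery check on a toy dictionary
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, -2.0])
print(omp(D, y, t0=2))  # [ 0.  3.  0. -2.]
```

In the full pipeline, this coder is run once per LR patch against the selected LR subdictionary, and the resulting coefficients are applied to the matching HR subdictionary.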
3. Experiment and Analysis
According to the analysis above, experiments have been conducted to demonstrate the effectiveness of the proposed method for superresolution image reconstruction. The number of training samples will affect the quality of the
Figure 2: Process of SR reconstruction.
Figure 3: Partial elements of the four different HR feature patches: (a) 0°, (b) 90°, (c) 45°, and (d) 135°.
reconstructed HR image, but too many samples lead to a decrease in reconstruction efficiency. In our experiments, the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) were chosen as the objective evaluation parameters of image quality; they compare the original high-resolution image with the reconstructed image pixel by pixel. Because they reflect the similarity, or fidelity, of two images at each pixel, PSNR and MSE are widely used objective evaluation parameters in the field of image processing. PSNR and MSE can be expressed as

MSE = (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} (x_{i,j} − x̂_{i,j})², (15)

PSNR = 10 lg (MAX_I² / MSE). (16)

The symbols M and N in (15) represent the width and height of the image, respectively, and x_{i,j} and x̂_{i,j} are the corresponding pixel values of the two images. The difference x_{i,j} − x̂_{i,j} is close to zero when the two images are similar; in other words, a small MSE value indicates two analogous images. In addition, the parameter MAX_I in (16) is the peak signal value; for example, it equals 255 for an 8-bit image. Furthermore, MSE is an important parameter for PSNR: a better reconstructed image always has a higher PSNR value.
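The two metrics in (15) and (16) are straightforward to compute; the sketch below assumes 8-bit images (MAX_I = 255) by default.

```python
import numpy as np

def mse(x, y):
    """Mean squared error over all pixels, as in Eq. (15)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_i=255.0):
    """Peak signal-to-noise ratio in dB, as in Eq. (16); max_i = 255 for 8-bit."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(max_i**2 / m)

a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
print(mse(a, b))             # 256.0
print(round(psnr(a, b), 2))  # 24.05
```

Note that PSNR diverges to infinity for identical images, which is why it is reported only for reconstructions that differ from the reference.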
In our experiments, the growth of the PSNR value of the reconstructed HR image approaches zero once the sample number increases beyond a certain degree. To guarantee both the comprehensiveness of the dictionary information and the efficiency of the algorithm, 100 pictures of different kinds were chosen as the training samples. For the LR color images, we transform the RGB images to the YCbCr space. Since the human visual system is more sensitive to the luminance channel, we use the superresolution reconstruction method based on dictionary learning proposed in this study to reconstruct the luminance channel, while the images in the Cb and Cr chrominance channels are simply magnified by the bicubic interpolation method.
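This color-handling step can be sketched as follows. The full-range BT.601 RGB-to-YCbCr conversion is standard; the nearest-neighbour chroma magnification and the stub SR operator are simplifications standing in for the bicubic and dictionary-based steps, respectively.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr, as used for JPEG-style images."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def upscale_color(lr_rgb, scale, sr_luma):
    """SR the luminance channel only; chroma gets a cheap interpolation.
    Nearest-neighbour stands in here for the paper's bicubic step."""
    ycc = rgb_to_ycbcr(lr_rgb.astype(float))
    y_hr = sr_luma(ycc[..., 0], scale)                   # learned SR on Y
    nn = lambda c: np.kron(c, np.ones((scale, scale)))   # chroma magnification
    return np.stack([y_hr, nn(ycc[..., 1]), nn(ycc[..., 2])], axis=-1)

# Stand-in SR operator so the sketch runs end to end
sr_stub = lambda y, s: np.kron(y, np.ones((s, s)))
lr = np.random.default_rng(0).uniform(0, 255, (8, 8, 3))
hr = upscale_color(lr, 3, sr_stub)
print(hr.shape)  # (24, 24, 3)
```

Processing only Y with the learned dictionaries keeps the expensive sparse coding to one channel while the eye-insensitive chroma channels are upscaled cheaply.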
The original LR images in these experiments are all downsampled from HR images by a factor of 1/3. The number of atoms in each subdictionary is 1000, and the iteration number of the dictionary training is set to 40. For a better comparison, the LR images are also reconstructed by the bicubic interpolation method and by Zeyde et al.'s method, respectively. Figures 4 and 5 show the results of the reconstructed gas monitoring images.
Figure 4(a) shows the original LR chimney image, and Figures 4(b)–4(d) are the images reconstructed by the bicubic method, Zeyde et al.'s method, and the proposed method, respectively. In the enlarged parts of these images, the structure of the chimney in Figures 4(b)–4(d) is clearer and smoother than that in Figure 4(a). The image in Figure 4(d) is the smoothest and shows the best quality, including better contrast, improved resolution, and clearer texture.
In addition, Figure 5 takes an image of a scene full of smog as a sample. With the reconstructed HR images, it is more practical to determine the exact pollution sources and their locations. As for the detailed information in the images, all three reconstructed HR images show a better human visual effect than the LR image, which shows obvious mosaic blocks. Furthermore, the HR image reconstructed by
Figure 4: LR chimney image and reconstructed HR images by different methods: (a) LR image, (b) reconstructed image by the bicubic method, (c) reconstructed image by Zeyde et al.'s method, and (d) reconstructed image by the proposed method.
the bicubic method looks blurred compared with the other
two HR images. As for the texture, stripe information and
other details in Figure 5(d) are more abundant and clearer.
In general, because the four different dictionaries contain different features of the training samples and show different similarities with the LR images, the images reconstructed by the proposed SR algorithm with adaptive dictionary learning all present better image quality than the raw LR images do. When the HR reconstructed image is obtained from the LR image, the HR patches are combined based on the weights of the features in different directions, which takes the differences among the various LR patches into account and guarantees the best match with the dictionaries in the process of SR reconstruction.
To evaluate the reconstruction results, the objective evaluation parameters PSNR and MSE were calculated for the reconstructed images in both Figures 4 and 5; the results are shown in Table 1. PSNR and MSE are two basic evaluation standards: higher PSNR and lower MSE mean better image quality. As Table 1 shows, the PSNR values of the two groups of images reconstructed by the proposed method are both higher than those of the images reconstructed by the other two methods, while the MSE values are significantly reduced. Compared with the images reconstructed by Zeyde et al.'s method, the PSNR of the chimney image reconstructed by the proposed method increases by 0.8602 dB, and the PSNR of the reconstructed smog image increases by 0.3369 dB. Different LR images end with different reconstruction results because of the association with the image content: images with more obvious gradient changes are more likely to be reconstructed well, benefitting from the higher matching accuracy with the various gradient dictionaries.
4. Conclusions
In conclusion, the proposed superresolution reconstruction method based on adaptive dictionary learning of gradient features can be used to obtain HR monitoring images of gas emission with more detailed information. It starts by training four couples of different dictionaries according to the directions of the gradient information and then reconstructs a HR gas monitoring image by combining the HR patches according to the weights of the information in the different directions. Finally, actual images of pollution sources were tested with the proposed method, and the experimental results show its effectiveness in reconstructing HR images. Our further study will focus on increasing the efficiency of the whole SR reconstruction procedure and on improving the dictionaries for particular kinds of images to make the monitoring task more effective.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Figure 5: LR smog image and reconstructed HR images by different methods: (a) LR image, (b) reconstructed image by the bicubic method, (c) reconstructed image by Zeyde et al.'s method, and (d) reconstructed image by the proposed method.
Table 1: Objective evaluation of the reconstructed images in Figures 4 and 5.

Image                 Bicubic    Zeyde      Proposed
Chimney  PSNR (dB)    32.9080    34.2736    35.1338
         MSE          33.2874    24.3062    19.9390
Smog     PSNR (dB)    37.0425    37.1426    37.4795
         MSE          12.8477    12.5363    11.6178
Acknowledgments
Grateful acknowledgement is made to the authors' friends Dr. Fei Liu and Ms. Pingli Han, who gave considerable help by means of suggestions, comments, and criticism. Thanks are due to Dr. Fei Liu for his encouragement and support, and to Ms. Pingli Han for her guidance on the paper. The authors deeply appreciate the contributions made to this work in various ways by their leaders and colleagues. This work was supported by the National Natural Science Foundation of China (no. 61505156); the 61st General Program of the China Postdoctoral Science Foundation (2017M613063); the Fundamental Research Funds for the Central Universities (JB170503); the State Key Laboratory of Applied Optics (CS16017050001); and the Young Scientists Fund of the National Natural Science Foundation of China (61705175).
References
[1] N. Middleton, P. Yiallouros, S. Kleanthous et al., “A 10-year time-series analysis of respiratory and cardiovascular morbidity in Nicosia, Cyprus: the effect of short-term changes in air pollution and dust storms,” Environmental Health, vol. 7, no. 1, pp. 7–39, 2008.
[2] S. Michaelides, D. Paronis, A. Retalis, and F. Tymvios, “Monitoring and forecasting air pollution levels by exploiting satellite, ground-based, and synoptic data, elaborated with regression models,” Advances in Meteorology, vol. 2017, Article ID 2954010, 17 pages, 2017.
[3] C. A. Pope III and D. W. Dockery, “Health effects of fine particulate air pollution: lines that connect,” Journal of the Air & Waste Management Association, vol. 56, no. 6, pp. 709–742.
[4] X. Y. Zhang, Y. Q. Wang, T. Niu et al., “Atmospheric aerosol compositions in China: spatial/temporal variability, chemical signature, regional haze distribution and comparisons with global aerosols,” Atmospheric Chemistry and Physics, vol. 12, no. 2, pp. 779–799, 2012.
[5] R. Idoughi, T. H. G. Vidal, P.-Y. Foucher, M.-A. Gagnon, and X. Briottet, “Background radiance estimation for gas plume quantification for airborne hyperspectral thermal imaging,” Journal of Spectroscopy, vol. 2016, Article ID 4616050, 4 pages, 2016.
[6] M. Schlerf, G. Rock, P. Lagueux et al., “A hyperspectral thermal infrared imaging instrument for natural resources applications,” Remote Sensing, vol. 4, no. 12, pp. 3995–4009, 2012.
[7] J. A. Hackwell, D. W. Warren, R. P. Bongiovi et al., “LWIR/MWIR imaging hyperspectral sensor for airborne and ground-based remote sensing,” in SPIE's 1996 International Symposium on Optical Science, Engineering, and Instrumentation, International Society for Optics and Photonics, vol. 2819, pp. 102–107, 1996.
[8] Y. Ferrec, S. Thétas, J. Primot et al., “Sieleters, an airborne imaging static Fourier transform spectrometer: design and preliminary laboratory results,” in Fourier Transform Spectroscopy, 2013.
[9] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in 2009 IEEE 12th International Conference on Computer Vision, pp. 349–356, Kyoto, Japan, 2009.
[10] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Advances and challenges in super-resolution,” International Journal of Imaging Systems and Technology, vol. 14, no. 2, pp. 47–57.
[11] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327–1344, 2004.
[12] D. M. Robinson, C. A. Toth, J. Y. Lo, and S. Farsiu, “Efficient Fourier-wavelet super-resolution,” IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2669–2681, 2010.
[13] S. P. Kim, N. K. Rose, and H. M. Valenzuela, “Recursive reconstruction of high resolution image from noisy undersampled multiframes,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 6, pp. 1013–1027, 1990.
[14] R. Hardie, “A fast image super-resolution algorithm using an adaptive Wiener filter,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2953–2964, 2007.
[15] K. I. Kim and Y. Kwon, “Single-image super-resolution using sparse regression and natural image prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127–1133, 2010.
[16] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56–65, 2002.
[17] J. C. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Anchorage, AK, USA, 23–28 June 2008.
[18] J. C. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
[19] R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations,” in LNCS 6920: Proceedings of the 7th International Conference on Curves and Surfaces, pp. 711–730, 2010.
[20] K. B. Zhang, X. B. Gao, D. C. Tao, and X. Li, “Multi-scale dictionary for single image super-resolution,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1114–1121, Providence, RI, USA, 16–21 June 2012.
[21] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[22] S. Baker and T. Kanade, “Limits on super-resolution and how to break them,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1167–1183, 2002.
[23] G. P. Mittu, M. Vivek, and P. Joonki, “Imaging inverse problem using sparse representation with adaptive dictionary learning,” in 2015 IEEE International Advance Computing Conference (IACC), pp. 1247–1251, Banglore, India, 12–13 June 2015.
[24] Q. G. Liu, S. S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, “Adaptive dictionary learning in sparse gradient domain for image recovery,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.
[25] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 40–44, Pacific Grove, CA, USA, 1–3 Nov. 1993.