Research Article
Hazardous Gas Emission Monitoring Based on
High-Resolution Images
Wenjian Chen,1 Yi Wang,2,3 Xuan Li,2 Wei Gao,1 Shiwei Ma,1 Yuanyuan Duan,1 and Xiaopeng Shao2

1Xi’an Institute of Applied Optics, Xi’an 710065, China
2School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071, China
3Chinese Flight Test Establishment, Xi’an 710089, China
Correspondence should be addressed to Xiaopeng Shao; xpshao@xidian.edu.cn
Received 28 September 2017; Accepted 21 November 2017; Published 4 March 2018
Academic Editor: Yufei Ma
Copyright © 2018 Wenjian Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Air pollution presents unprecedentedly severe challenges to humans today. Various measures have been taken to monitor pollution
from gas emissions and the changing atmosphere, of which imaging is of crucial importance: from images of target scenes,
intuitive judgments and in-depth data are achievable. However, due to the limitations of imaging devices, effective and efficient
monitoring work is often hindered by low-resolution target images. To deal with this problem, a superresolution reconstruction
method based on the idea of sparse representation was proposed in this study for obtaining high-resolution monitoring images.
In particular, multiple dictionary pairs were trained according to the gradient features of samples, and the optimal pair of
dictionaries was chosen for reconstruction by judging the weighting of the information in different directions. Furthermore,
the K-means singular value decomposition (K-SVD) algorithm was used to train the dictionaries, and the orthogonal matching
pursuit (OMP) algorithm was employed to calculate the sparse coding coefficients. Finally, the experimental results demonstrated
the method’s advantages in both visual fidelity and numerical measures.
1. Introduction
Today, humans are facing the most severe air pollution in
history, caused by industrial emissions, fuel combustion,
motor vehicle exhaust, and so on [1–4]. It adversely affects
human health and daily life. Various monitoring measures have
been taken to provide real-time information about the
environment [5]. For example, the monitoring process focuses
on the number of chimneys that are operating and the scale of
their emissions. As an important method, imaging technology
has made the monitoring task easy and efficient, although it
often suffers from low-resolution problems originating in the
imaging devices. Details are not available in the captured
images, which further limits the judgment of the conditions
[6–8].
To improve the resolution of monitoring images, charge-
coupled devices (CCDs) with high resolution are exploited,
which will certainly lead to a higher cost. To improve the
effect of monitoring images without replacing the original
detection equipment, people gradually turn to image-
processing methods, of which the superresolution reconstruc-
tion method is an effective way to improve the quality of
images including hyperspectral, visible light, and infrared
images. Many significant superresolution (SR) methods have
been proposed to deal with SR reconstruction problems in past
decades [9–12]. Among them, the conventional interpolation-
restoration approach is computationally simple. One improved
SR algorithm is the generalized nonlocal means algorithm,
which extends the successful nonlocal means (NLM) video
denoising technique [13, 14].
In addition, in the study of improving image resolution,
single-frame image SR reconstruction algorithms based on
dictionary learning show their unique advantage in combin-
ing the prior information of images in this research area
[15, 16], so these methods reconstruct images with higher
resolution than others.

Hindawi, Journal of Spectroscopy, Volume 2018, Article ID 2698025, 7 pages, https://doi.org/10.1155/2018/2698025

Inspired by compressed sensing (CS)
theory, Yang et al. firstly proposed the SR reconstruction
method based on sparse coding [17, 18], named sparse
coding-based superresolution (SCSR), which could recon-
struct a HR image by global dictionary learning. Zeyde et al.
made an improvement to Yang et al.’s method and proposed
the single image scale-up method using sparse representation
(SISR) [19]. The SCSR algorithm proposed by Yang et al. and
the SISR method proposed by Zeyde et al. are both effective
learning-based SR reconstruction methods, which try to
obtain a HR image by training a high-resolution dictionary
and a corresponding low-resolution (LR) dictionary.
When reconstructing LR images of different kinds, the
methods mentioned above use only a single pair of dictionaries.
However, the information in different gas observation images,
or in different parts of one image, may vary greatly [20]. If we
separate a whole LR image into many patches and then choose
the dictionary couple with the highest similarity according
to the features in the LR patches in the process of recon-
structing HR patches, the resolution of the reconstructed
monitoring image could be improved by a large margin.
Hence, a novel SR reconstruction based on adaptive dictio-
nary learning of gradient features is proposed in this paper
according to the analyses above. In particular, the features
of training samples are clustered into several groups by the
K-means method, and multidictionary pairs are trained off-
line by the K-means singular value decomposition (K-SVD)
algorithm [21]. Then, the dictionary pair with the highest
similarity to the LR image is selected in the process of
online image reconstruction. Finally, several groups of
experiments were completed, and the results show that images
reconstructed by the proposed method excel in both subjective
visual perception and objective evaluation.
2. Principle of SR Image Reconstruction
Dictionary learning is particularly important in the process
of image SR reconstruction. By means of sparse representation,
more information can be captured by fewer dictionary atoms,
which leads to higher-quality reconstructed images at a lower
time cost. In this section, the calculations of image sparse
representation and dictionary pair learning are described in
detail.
In the process of imaging, let the original ideal HR image
be x̂, which is affected by the blurring of the optical
imaging system and the downsampling of the CCD when displayed
on the device. The traditional imaging process can be modeled
as [22]

    x = S H x̂ + v,                                         (1)

where x is the observed image, S represents the downsampling
operator, H is a blurring filter, and v is the additive noise
caused by the system.
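As a sketch of the degradation model in (1) — our own NumPy illustration, which assumes a uniform box blur for H and pixel decimation for S, since the paper does not specify either operator:

```python
import numpy as np

def degrade(x_hat, scale=3, blur_size=3, noise_sigma=0.0, seed=0):
    """Simulate x = S H x_hat + v: blur with a uniform kernel (H),
    keep every `scale`-th pixel (S), and add Gaussian noise (v)."""
    pad = blur_size // 2
    xp = np.pad(x_hat, pad, mode='edge')
    blurred = np.zeros_like(x_hat, dtype=float)
    for di in range(blur_size):
        for dj in range(blur_size):
            blurred += xp[di:di + x_hat.shape[0], dj:dj + x_hat.shape[1]]
    blurred /= blur_size ** 2                    # H: uniform (box) blur
    lr = blurred[::scale, ::scale]               # S: downsampling operator
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_sigma, lr.shape)  # v: additive noise

x_hat = np.arange(36, dtype=float).reshape(6, 6)
x = degrade(x_hat, scale=3, blur_size=1)  # blur_size=1, sigma=0: pure decimation
```

With `blur_size=1` and zero noise, the model reduces to the downsampling operator S alone, which matches the 1/3 decimation used later in the experiments.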
Based on the theory of CS, to obtain an ideal HR image
using a fast SR reconstruction algorithm, both SISR and SCSR
require that the ideal HR image x̂ be sparsely represented by
the HR dictionary D_h, which can be written as [23]

    x̂ = D_h α.                                             (2)

The observed LR image can then be expressed as

    x = S H D_h α,   ‖α‖_0 ≪ n.                             (3)

As (2) and (3) show, α ∈ R^n is a sparse vector with n
elements. The sparse representation of the ideal HR image is
shown in Figure 1. But this equation is not satisfied by many
images. Natural image statistics show that the areas of image
primitives usually have very low intrinsic dimension and thus
can be represented by a small number of training samples [24].
The primitives mentioned here refer to high-frequency feature
areas such as edges and inflection points in images.
In this study, an assumption was made that the image
primitives can be represented in the sparse form by the dic-
tionary which is trained with a large number of primitive
patches. A HR image can be obtained by the following steps,
and Figure 2 shows the process of SR reconstruction.
(i) The primitive areas of training samples are extracted
    and divided into four subsets according to the direction
    of their gradient features. The four directions are 0°,
    45°, 90°, and 135°, which indicate the angles with
    respect to the vertical direction.
(ii) Divide the primitive areas into four subsets accord-
ing to the gradient features, which are used as the
data source to train four subdictionary sets, each
including a HR dictionary and a corresponding LR
dictionary by the K-SVD algorithm.
(iii) According to the weightings of the gradient feature
information in each direction within a LR patch,
the certain HR subdictionary and the corresponding
LR subdictionary are chosen to reconstruct a HR
subpatch.
(iv) The HR subpatches reconstructed by different
dictionaries are combined into the final HR image
according to the weight of the features in four
directions.
Typically, the primitive areas in the images can be
obtained by (linear) feature extraction operators. For the
training sample set x, the HR primitive areas in one
direction, x_h^k, and the corresponding LR primitive areas,
x_l^k, can be obtained by

    x_h^k = F_k x,
    x_l^k = F_k S x,                                        (4)

where k = 1, 2, 3, 4 and S is the downsampling operator. The
features in the horizontal gradient (0°) are obtained after
filtering by the operator F_1, the features in the vertical
gradient (90°) by F_2, and the features in the 45° and 135°
directions by F_3 and F_4, respectively. The four filters are

    F_1 = [ −1  0  1 ]      F_2 = [  1  1  1 ]
          [ −1  0  1 ]            [  0  0  0 ]
          [ −1  0  1 ],           [ −1 −1 −1 ],

    F_3 = [  0  1  1 ]      F_4 = [  1  1  0 ]
          [ −1  0  1 ]            [  1  0 −1 ]
          [ −1 −1  0 ],           [  0 −1 −1 ].             (5)
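The directional filters in (5) can be applied directly to measure how much gradient energy a patch carries in each direction; the following NumPy sketch (the helper `filter2` is ours, used to avoid a SciPy dependency) shows that a patch containing only vertical edges responds to the 0° filter and not at all to the 90° one:

```python
import numpy as np

# The four directional filters from (5), keyed by direction in degrees.
F = {
    0:   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),   # horizontal gradient
    90:  np.array([[ 1, 1, 1], [ 0, 0, 0], [-1, -1, -1]]), # vertical gradient
    45:  np.array([[ 0, 1, 1], [-1, 0, 1], [-1, -1, 0]]),
    135: np.array([[ 1, 1, 0], [ 1, 0, -1], [ 0, -1, -1]]),
}

def filter2(img, f):
    """Tiny 'valid' 2-D correlation with a 3x3 filter."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * f)
    return out

# A patch whose edges run vertically: intensity changes only across columns.
patch = np.tile(np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0]), (6, 1))
energy = {d: np.abs(filter2(patch, f)).sum() for d, f in F.items()}
```

The `energy` values play the role of the per-direction feature amounts that later drive the dictionary selection: the 0° filter dominates, the 90° response is exactly zero, and the diagonal filters respond only partially.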
The HR and LR primitive areas obtained in (4) can be
sparsely represented by the corresponding HR and LR
dictionaries, respectively. The process of obtaining the
dictionaries can be expressed mathematically as

    D_h^k = argmin_{D_h^k, α} ‖x_h^k − D_h^k α‖_2^2,
            s.t. ∀i, ‖α_i‖_0 < T_0,

    D_l^k = argmin_{D_l^k, α} ‖x_l^k − D_l^k α‖_2^2,
            s.t. ∀i, ‖α_i‖_0 < T_0.                         (6)

Here, D_h^k refers to the kth HR subdictionary and D_l^k to
the corresponding LR subdictionary. Because the HR patches and
the LR patches share the same sparse representation
coefficients, (6) can be combined as

    min_{D^k, α} ‖x^k − D^k α‖_2^2,  s.t. ∀i, ‖α_i‖_0 < T_0,   (7)
where

    x = [ (1/√N) x_h        D = [ (1/√N) D_h
          (1/√M) x_l ],           (1/√M) D_l ].             (8)

The N and M in the equation above denote the sizes of the HR
input and the LR input, respectively. Given an initial
dictionary D_0, the dictionary pair D_h^k and D_l^k can be
solved by the K-SVD algorithm [21]. Figure 3 shows some parts
of the four HR feature patches obtained with the same training
samples.
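A minimal sketch of the K-SVD dictionary-update stage used to solve (6) and (7): with the sparse codes held on their supports, each atom and its coefficients are replaced by the best rank-1 (SVD) approximation of the residual computed without that atom. This is our own simplified illustration; the sparse-coding stage (OMP) and dictionary initialization are omitted:

```python
import numpy as np

def ksvd_atom_update(X, D, A):
    """One K-SVD dictionary-update sweep.
    X: training data (d x n), D: dictionary (d x K), A: sparse codes (K x n).
    For each atom k, restrict to the signals that actually use it, form
    the residual with atom k's contribution removed, and replace the atom
    and its coefficients by the rank-1 SVD of that residual."""
    D = D.copy()
    A = A.copy()
    for k in range(D.shape[1]):
        users = np.nonzero(A[k])[0]
        if users.size == 0:
            continue                       # unused atom: leave unchanged
        E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                  # updated, unit-norm atom
        A[k, users] = s[0] * Vt[0]         # updated coefficients on support
    return D, A
```

Each rank-1 update cannot increase the residual error on the fixed supports, so the fit is non-increasing over a sweep; in practice the sweep alternates with a sparse-coding pass for the set number of iterations (40 in the experiments below).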
For an LR image y to be reconstructed, a simple
segmentation method was first employed to divide it into p
patches y_i ∈ R^n, i = 1, 2, …, p. For a certain LR patch y_i,
its corresponding HR patch is ŷ_i (ŷ_i ∈ R^n). According to
(1), the LR patches can be written as y_i = S H ŷ_i + v, where
v refers to the noise. Our purpose is to obtain ŷ_i^k from y_i
and a certain subdictionary pair D_l^k and D_h^k; the process
can be expressed as

    β_i^k = argmin_{β_i^k} ‖y_i − S H D_l^k β_i^k‖_2^2,
            s.t. ‖β_i^k‖_0 ≤ T_0,                           (9)

    ŷ_i^k = D_h^k β_i^k.                                    (10)
The symbol β_i^k in the equations above denotes the sparse
representation coefficient of y_i under a certain
subdictionary D_l^k. Equation (9) can be solved for β_i^k by
the orthogonal matching pursuit (OMP) method [25]. In the
process of SR reconstructing the HR patch ŷ_i, we need to
combine the weights of the information in the four different
directions. The HR patch ŷ_i can be written as

    ŷ_i = Σ_{j=1}^{4} q_j ŷ_i^j,                            (11)

where q_j represents the weighting coefficient of the
information in each of the four directions. To guarantee
compatibility between the LR patches and the chosen
dictionary, the weighting coefficient is expressed by the
amount of information in each direction:

    q_j = ‖F_j y_i‖ / Σ_{k=1}^{4} ‖F_k y_i‖,  j = 1, 2, 3, 4.   (12)

Combining (10), (11), and (12), the final HR patch ŷ_i is

    ŷ_i = Σ_{j=1}^{4} ( ‖F_j y_i‖ / Σ_{k=1}^{4} ‖F_k y_i‖ ) D_h^j β_i^j.   (13)
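Equation (9) is solved by OMP; a compact, self-contained sketch of the algorithm follows (our own implementation, assuming unit-norm atoms, shown here on a trivial identity dictionary rather than the paper's learned D_l^k):

```python
import numpy as np

def omp(D, y, t0):
    """Orthogonal matching pursuit for (9): greedily select at most t0
    atoms (columns of D) and least-squares refit y on the support."""
    residual = y.astype(float).copy()
    support = []
    beta = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(t0):
        k = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < 1e-10:
            break
    beta[support] = coef
    return beta

D = np.eye(4)                                   # toy dictionary: unit atoms
beta = omp(D, np.array([0.0, 3.0, 0.0, 1.0]), t0=2)
```

The returned `beta` has at most t0 nonzero entries, matching the sparsity constraint ‖β_i^k‖_0 ≤ T_0 in (9).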
Figure 1: Example of sparse representation.
The whole HR image ŷ is finally spliced together from all
the HR patches ŷ_i, as shown in

    ŷ = Σ_{i=1}^{p} ŷ_i.                                    (14)
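The split-and-splice of (14) for non-overlapping patches can be sketched as follows (illustrative helper names; the paper's "simple segmentation method" is assumed to be a non-overlapping grid):

```python
import numpy as np

def split_patches(img, p=4):
    """Split an image into non-overlapping p x p patches
    (dimensions assumed divisible by p)."""
    h, w = img.shape
    return [img[i:i + p, j:j + p]
            for i in range(0, h, p) for j in range(0, w, p)]

def splice(patches, shape, p=4):
    """Inverse of split_patches: place each reconstructed HR patch
    back at its own position, realizing the sum in (14)."""
    out = np.zeros(shape)
    idx = 0
    for i in range(0, shape[0], p):
        for j in range(0, shape[1], p):
            out[i:i + p, j:j + p] = patches[idx]
            idx += 1
    return out
```

Because the patches are disjoint, splicing the unmodified patches reproduces the original image exactly; in the SR pipeline each patch is replaced by its reconstruction ŷ_i before splicing.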
3. Experiment and Analysis
According to the analysis above, experiments have been
conducted to demonstrate the effectiveness of the proposed
method for superresolution image reconstruction.

Figure 2: Process of SR reconstruction (training samples → feature extraction → four subsets → K-SVD → four subdictionaries; LR image → selection → combination → HR image).

Figure 3: Partial elements of the four different HR feature patches: (a) 0°, (b) 90°, (c) 45°, and (d) 135°.

The number of training samples affects the quality of the
reconstructed HR image, but too many samples lead to a
decrease in the reconstruction efficiency. In our experiments,
the peak signal-to-noise ratio (PSNR) and the mean square
error (MSE) were chosen as the objective evaluation parameters
of image quality; they compare the original high-resolution
image with the reconstructed image pixel by pixel. Because
they reflect the similarity, or fidelity, of two images at the
pixel level, PSNR and MSE are widely used objective evaluation
parameters in the field of image processing. PSNR and MSE can
be expressed as
MSE can be expressed as
MSE = 1
M×N〠
M
i=1
〠
N
i=1
xi−xi
2,15
PSNR = 10lg MAXI2
MSE 16
The symbols Mand Nin (15) represent the width and
height of the image, respectively. xiand ximean the pixel
values corresponding to two images. The result of xi−xiis
close to zero when these two images are similar. In other
words, a small MSE value indicates two analogous images.
In addition, the parameter MAXIin (16) means the value
of the signal peak. For example, the value equals 255, when
we use an 8-bit sample image. Furthermore, MSE is an
important parameter for PSNR. A better reconstruction
image always has a higher value of PSNR.
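Equations (15) and (16) translate directly into code; a small sketch with MAX_I = 255 by default, per the 8-bit example:

```python
import numpy as np

def mse(x, y):
    """Mean square error over all M x N pixels, as in (15)."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, max_i=255.0):
    """PSNR in dB, as in (16); max_i is the signal peak (255 for 8 bits)."""
    m = mse(x, y)
    return float('inf') if m == 0 else 10.0 * np.log10(max_i ** 2 / m)
```

Identical images give MSE = 0 and hence infinite PSNR; a uniform per-pixel error of 1 gray level gives PSNR = 20 lg 255 ≈ 48.13 dB, which puts the 32–37 dB values in Table 1 in context.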
In our experiments, the growth of the PSNR of the
reconstructed HR image approaches zero once the sample number
increases beyond a certain point. To guarantee both the
comprehensiveness of the dictionary information and the
efficiency of the algorithm, 100 pictures of different kinds
were chosen as the training samples. For the LR color images,
we transform the RGB images to a YCbCr space. Since the
human visual system is more sensitive to the luminance
channel, we use the superresolution reconstruction method
based on dictionary learning proposed in this study to recon-
struct the images in the luminance channel, and the images in
the Cb and Cr chrominance channels are simply magnified
by the bicubic interpolation method.
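The RGB-to-YCbCr split described above can be sketched with the standard BT.601 luma weights (a simplified full-range variant of our own; the paper does not state its exact conversion coefficients):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (floats in [0, 1]) into luminance Y and
    chrominance Cb, Cr using BT.601 luma weights (full-range form)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance: SR is applied here
    cb = 0.5 + 0.564 * (b - y)                 # chrominance: bicubic
    cr = 0.5 + 0.713 * (r - y)                 # interpolation suffices here
    return y, cb, cr
```

The dictionary-based SR then operates only on the `y` channel, while `cb` and `cr` are simply magnified by bicubic interpolation, as described above.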
The original LR images in these experiments are all
downsampled from HR images by a factor of 1/3. The number of
atoms in each subdictionary is
1000, and the iteration number of the dictionary training is
set to 40. For comparison, the LR images are also
reconstructed by the bicubic interpolation method and by Zeyde
et al.’s method. Figures 4 and 5 show the results
of reconstructed gas monitoring images.
Figure 4(a) shows the original LR chimney image, and
Figures 4(b)–4(d) are the reconstructed images based on
the bicubic method, Zeyde et al.’s method, and the proposed
method, respectively. For the enlarged parts of these images,
the structure of the chimney in Figures 4(b)–4(d) is clearer
and smoother than that in Figure 4(a). The image in
Figure 4(d) is smoother and shows better quality including
better contrast, improved resolution, clearer texture, and
detailed information.
In addition, Figure 5 takes an image of a scene full of
smog as a sample. With the reconstructed HR images, it is
more practical to determine the exact pollution sources and
their locations. As for the detailed information in the images,
all of the three reconstructed HR images show a better
human visual effect than the LR image, which shows obvious
mosaic blocks.

Figure 4: LR chimney image and HR images reconstructed by different methods: (a) LR image, (b) bicubic method, (c) Zeyde et al.’s method, and (d) the proposed method.

Furthermore, the HR image reconstructed by
the bicubic method looks blurred compared with the other
two HR images. As for the texture, stripe information and
other details in Figure 5(d) are more abundant and clearer.
In general, because four different dictionaries contain dif-
ferent features of the training samples and show different
similarities with the LR images, the reconstructed images by
the proposed SR algorithm of adaptive dictionary learning
all present a better image quality than raw LR images do.
When the HR reconstruction image is obtained from the LR
image, the HR patches are combined according to the weights of
the features in different directions, which accounts for the
differences among LR patches and guarantees the best match
with the dictionaries in the process of SR reconstruction.
To evaluate the reconstruction results, the objective eval-
uation parameters PSNR and MSE are analyzed and calcu-
lated for reconstructed images in both Figures 4 and 5. The
results are shown in Table 1. Higher PSNR and lower MSE mean
better image quality. As Table 1 shows, the PSNR values of the
two groups of images reconstructed by the proposed method are
both higher than those obtained with the other two methods,
while the MSE values are significantly reduced. Compared with
the images reconstructed by Zeyde et al.’s method, the PSNR of the recon-
structed by Zeyde et al.’s method, the PSNR of the recon-
structed chimney image by the proposed method increases
by 0.8602 dB and the PSNR of the reconstructed smog image
by the proposed method increases by 0.3369 dB. Different LR
images yield different reconstruction results because the
outcome depends on the image content: images with more obvious
gradient changes are more likely to be well reconstructed,
benefitting from the higher matching accuracy with the various
gradient dictionaries.
4. Conclusion
In conclusion, the proposed superresolution reconstruction
method based on adaptive dictionary learning of gradient
features can be used to obtain HR monitoring images of gas
emission with more detailed information. It starts by training
four couples of different dictionaries according to the direc-
tions of gradient information and reconstructing a HR gas
monitoring image by combining the HR patches according
to the weight of the information in different directions.
Finally, actual images of pollution sources are tested by the
proposed method. Experimental results show the effective-
ness of the proposed method in reconstructing HR images.
Our future work will focus on increasing the efficiency of
the whole SR reconstruction procedure and on improving the
dictionaries for particular classes of images, to make the
monitoring task more effective and efficient.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Figure 5: LR smog image and HR images reconstructed by different methods: (a) LR image, (b) bicubic method, (c) Zeyde et al.’s method, and (d) the proposed method.

Table 1: Objective evaluation of the reconstructed images in Figures 4 and 5.

Image     Metric   Bicubic    Zeyde      Proposed
Chimney   PSNR     32.9080    34.2736    35.1338
          MSE      33.2874    24.3062    19.9390
Smog      PSNR     37.0425    37.1426    37.4795
          MSE      12.8477    12.5363    11.6178
Acknowledgments
Grateful acknowledgement is made to the authors’ friends
Dr. Fei Liu and Ms. Pingli Han, who gave them considerable
help by means of suggestions, comments, and criticism.
Thanks are due to Dr. Fei Liu for his encouragement and
support and to Ms. Pingli Han for her guidance on the paper.
The authors deeply appreciate the contributions made to this
work in various ways by their leaders and colleagues. This
work was supported by the National
Natural Science Foundation of China (no. 61505156); the
61st General Program of China Postdoctoral Science Foun-
dation (2017M613063); Fundamental Research Funds for
the Central Universities (JB170503); the State Key Labora-
tory of Applied Optics (CS16017050001); and the Young
Scientists Fund of the National Natural Science Foundation
of China (61705175).
References
[1] N. Middleton, P. Yiallouros, S. Kleanthous et al., “A 10-year
time-series analysis of respiratory and cardiovascular morbid-
ity in Nicosia, Cyprus: the effect of short-term changes in
air pollution and dust storms,”Environmental Health, vol. 7,
no. 1, pp. 7–39, 2008.
[2] S. Michaelides, D. Paronis, A. Retalis, and F. Tymvios,
“Monitoring and forecasting air pollution levels by exploiting
satellite, ground-based, and synoptic data, elaborated with
regression models,”Advances in Meteorology, vol. 2017,
Article ID 2954010, 17 pages, 2017.
[3] C. A. Pope III and D. W. Dockery, “Health effects of fine
particulate air pollution: lines that connect,”Journal of the Air
& Waste Management Association, vol. 56, no. 6, pp. 709–742,
2006.
[4] X. Y. Zhang, Y. Q. Wang, T. Niu et al., “Atmospheric aerosol
compositions in China: spatial/temporal variability, chemical
signature, regional haze distribution and comparisons with
global aerosols,”Atmospheric Chemistry and Physics, vol. 12,
no. 2, pp. 779–799, 2012.
[5] R. Idoughi, T. H. G. Vidal, P.-Y. Foucher, M.-A. Gagnon,
and X. Briottet, “Background radiance estimation for gas
plume quantification for airborne hyperspectral thermal imag-
ing,”Journal of Spectroscopy, vol. 2016, Article ID 4616050,
4 pages, 2016.
[6] M. Schlerf, G. Rock, P. Lagueux et al., “A hyperspectral ther-
mal infrared imaging instrument for natural resources applica-
tions,”Remote Sensing, vol. 4, no. 12, pp. 3995–4009, 2012.
[7] J. A. Hackwell, D. W. Warren, R. P. Bongiovi et al., “LWIR/
MWIR imaging hyperspectral sensor for airborne and
ground-based remote sensing,”in SPIE's 1996 International
Symposium on Optical Science, Engineering, and Instrumenta-
tion International Society for Optics and Photonics, vol. 2819,
pp. 102–107, 1996.
[8] Y. Ferrec, S. Thétas, J. Primot et al., “Sieleters, an airborne imag-
ing static Fourier transform spectrometer: design and prelimi-
nary laboratory results,”in Fourier transform Spectroscopy, 2013.
[9] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a
single image,”in 2009 IEEE 12th International Conference on
Computer Vision, pp. 349–356, Kyoto, Japan, 2009.
[10] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Advances
and challenges in super-resolution,”International Journal of
Imaging Systems and Technology, vol. 14, no. 2, pp. 47–57,
2004.
[11] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and
robust multiframe super resolution,”IEEE Transactions on
Image Processing, vol. 13, no. 10, pp. 1327–1344, 2004.
[12] D. M. Robinson, C. A. Toth, J. Y. Lo, and S. Farsiu, “Efficient
Fourier-wavelet super-resolution,”IEEE Transactions on Image
Processing, vol. 19, no. 10, pp. 2669–2681, 2010.
[13] S. P. Kim, N. K. Rose, and H. M. Valenzuela, “Recursive recon-
struction of high resolution image from noisy undersampled
multiframes,”IEEE Transactions on Acoustics, Speech, and
Signal Processing, vol. 38, no. 6, pp. 1013–1027, 1990.
[14] R. Hardie, “A fast image super-resolution algorithm using an
adaptive Wiener Filter,”IEEE Transactions on Image Process-
ing, vol. 16, no. 12, pp. 2953–2964, 2007.
[15] K. I. Kim and Y. Kwon, “Single-image super-resolution using
sparse regression and natural image prior,”IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 32, no. 6,
pp. 1127–1133, 2010.
[16] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based
super-resolution,”IEEE Computer Graphics and Applications,
vol. 22, no. 2, pp. 56–65, 2002.
[17] J. C. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-
resolution as sparse representation of raw image patches,”in
2008 IEEE Conference on Computer Vision and Pattern Recog-
nition, pp. 1–8, Anchorage, AK, USA, 23-28 June 2008.
[18] J. C. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-
resolution via sparse representation,”IEEE Transactions on
Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
[19] R. Zeyde, M. Elad, and M. Protter, “On single image scale-up
using sparse-representations,”in LNCS 6920: Proceedings of
the 7th International Conference on Curves and Surfaces,
pp. 711–730, 2010.
[20] K. B. Zhang, X. B. Gao, D. C. Tao, and X. Li, “Multi-scale
dictionary for single image super-resolution,”in 2012 IEEE
Conference on Computer Vision and Pattern Recognition,
pp. 1114–1121, Providence, RI, USA, 16-21 June 2012.
[21] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an
algorithm for designing overcomplete dictionaries for sparse
representation,”IEEE Transactions on Signal Processing,
vol. 54, no. 11, pp. 4311–4322, 2006.
[22] S. Baker and T. Kanade, “Limits on super-resolution and how
to break them,”IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 24, no. 9, pp. 1167–1183, 2002.
[23] G. P. Mittu, M. Vivek, and P. Joonki, “Imaging inverse problem
using sparse representation with adaptive dictionary learning,”
in 2015 IEEE International Advance Computing Conference
(IACC), pp. 1247–1251, Bangalore, India, 12-13 June 2015.
[24] Q. G. Liu, S. S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang,
“Adaptive dictionary learning in sparse gradient domain for
image recovery,”IEEE Transactions on Image Processing,
vol. 22, no. 12, pp. 4652–4663, 2013.
[25] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal
matching pursuit: recursive function approximation with
applications to wavelet decomposition,”in Proceedings of
27th Asilomar Conference on Signals, Systems and Computers,
vol. 1, pp. 40–44, Pacific Grove, CA, USA, 1-3 Nov. 1993.