Citation: Hou, F.; Zhang, Y.; Zhou, Y.; Zhang, M.; Lv, B.; Wu, J. Review on Infrared Imaging Technology. Sustainability 2022, 14, 11161. https://doi.org/10.3390/su141811161
Academic Editor: Gwanggil Jeon
Received: 2 August 2022
Accepted: 2 September 2022
Published: 6 September 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Review
Review on Infrared Imaging Technology
Fujin Hou 1, Yan Zhang 2, Yong Zhou 3, Mei Zhang 4,*, Bin Lv 4,* and Jianqing Wu 5,6
1 Shandong Hi-Speed Construction Management Group Co., Ltd., Jinan 250001, China
2 TECH Traffic Engineering Group Co., Ltd., Beijing 100048, China
3 Shandong High-Speed Group Co., Ltd., Jinan 250014, China
4 School of Traffic and Transportation, Lanzhou Jiaotong University, Lanzhou 730070, China
5 School of Qilu Transportation, Shandong University, Jinan 250002, China
6 Suzhou Research Institute, Suzhou 215000, China
* Correspondence: 11210041@stu.lzjtu.edu.cn (M.Z.); jdlbxx@mail.lzjtu.cn (B.L.); Tel.: +86-191-1938-6309 (M.Z.); +86-139-1910-8171 (B.L.)
Abstract: The application of infrared camera-related technology is a trending research topic. Reviewing the development of infrared thermal imagers, this paper introduces their main processing technologies, expounds nonuniformity correction, noise removal, and pseudo-color enhancement of infrared images, and briefly analyzes the principal algorithms used in image processing. The technologies of blind-element detection and compensation, temperature measurement, and target detection and tracking for infrared thermal imagers are described, and by analyzing the main algorithms for infrared temperature measurement and for target detection and tracking, the advantages and disadvantages of these technologies are set out. The development of multi/hyperspectral infrared remote sensing technology and its applications are also introduced. The analysis shows that infrared thermal imager processing technology is widely used in many fields, especially autonomous driving, and this review helps to expand the reader’s research ideas and methods.
Keywords: image processing; blind element detection and compensation; infrared thermography temperature measurement; target detection and tracking; multi/hyperspectral remote sensing technology
1. Introduction
Infrared is a type of electromagnetic wave. Any object whose temperature is above absolute zero emits infrared radiation. Thermal infrared imaging usually refers to mid-infrared and far-infrared imaging. Thermal imaging uses an infrared detector and an optical imaging objective: the objective receives the infrared radiation energy of the measured target and projects its distribution pattern onto the photosensitive element of the infrared detector, whose output is then processed by the sensor's electronics into an infrared thermal image [1]. Infrared thermal imaging is a non-destructive, non-contact detection technology that was first applied in the military field [2]. It is divided into cooled (refrigeration-type) and uncooled infrared technology. The cooled infrared thermal imager was initially confined to the laboratory because of the relatively large volume of its refrigeration equipment; research on cooled imagers focuses on raising the working temperature, long-wave detection, and system integration. Uncooled infrared focal plane technology belongs to the third generation of infrared detection technology; the detectors used are mainly focal plane detectors and two-color detectors, and the uncooled type is widely used.
Hyperspectral remote sensing is a remote sensing science and technology with high spectral resolution, founded on spectroscopy. Remote sensing technology accurately receives and records the wavelength changes caused by the interaction between electromagnetic waves and materials, providing rich ground-feature information through the resulting differences in reflectance; this information is determined by the macroscopic and microscopic characteristics of ground features. From the initial remote sensing technology to the present hyperspectral stage, remote sensing has entered a new phase and is widely used in geological survey [3], agriculture [4], vegetation remote sensing [5], marine remote sensing [6], environmental monitoring [7], and other areas. However, hyperspectral data have many spectral bands and considerable redundancy, so processing such as dimension reduction and denoising is needed.
With the development of science and technology, many advantages of infrared thermal imaging and hyperspectral remote sensing have been exploited, such as forming a thermal image by passively receiving radiation from the human body. While thermal images are being taken, the body is not exposed to X-rays, ultrasound, or other active electromagnetic waves. This diagnostic method is harmless to the human body, offers good concealment, and works in all weather; it is widely used in medical treatment [8], construction [9], electric power [10], aviation [11], transportation [12], and other fields. However, infrared thermal images require a series of processing steps because of their low contrast and poor detail resolution. The purpose of this review is to summarize previous research, point out its shortcomings, and survey deep-learning-based optimization algorithms and the development direction of infrared thermal imagers, which have great application potential in advanced driving assistance systems.
2. Infrared Thermal Imagers
Components of an Infrared Thermal Imager
Thermal imaging systems generally have four basic components: the optical system,
the infrared detector, the electronic information processing system, and the display system.
As shown in Figure 1, the function of the optical system is to focus the received infrared rays
onto the photosensitive elements of the infrared detector. The infrared detector converts
infrared radiation into an electrical signal. It is the core component of the thermal imaging
camera. Amplification and processing of electrical signals is carried out by electronic
information processing systems. The display shows the electrical signal as a visible image
on a monitor or LED screen.
Figure 1. Components of a thermal imaging camera.
Focal plane thermal imaging cameras have a two-dimensional flat detector array and an electronic scanning function. Infrared radiation from the target is focused by a simple objective lens onto the plane of the infrared detector array, in a way essentially similar to the principle of photography. The imaging principle is shown in Figure 2 [13].
Figure 2. Focal plane thermal imaging principle [14].
Focal plane detectors consist of arrays of tens of thousands of sensing elements. They offer good response-rate uniformity, micron-scale element size, and low power consumption. The resistive microbolometer is the most technically mature type of infrared detector and has the broadest range of applications. When infrared radiation passes through the optical lens onto a detection pixel, it raises the temperature of the sensitive area and changes the resistance of the thermal film. The principle is shown in Figure 3.
Figure 3. Principle of operation of an uncooled thermal imaging camera [2].
As shown in Figure 3, R1 is the built-in (reference) detector, R2 is the working detector, R3 and R4 are standard resistors, and E is the sampled electrical signal. When there is no infrared radiation, the bridge circuit remains balanced and no voltage signal is output. When infrared radiation is present, the temperature of resistor R2 changes, so its resistance value also changes; the circuit is then unbalanced, a voltage difference appears across the signal output terminals, and a voltage signal is output [15].
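The bridge readout described above can be sketched with an idealized Wheatstone-bridge model. The `bridge_output` helper and all component values below are illustrative assumptions, not taken from the paper:

```python
# Idealized Wheatstone-bridge model of the microbolometer readout.
# R1 is the shielded (reference) detector, R2 the working detector,
# R3 and R4 are fixed standard resistors; E is the bias voltage.
# All component values are illustrative, not from the paper.

def bridge_output(E, R1, R2, R3, R4):
    """Differential output voltage of the bridge (ideal, unloaded)."""
    return E * (R2 / (R1 + R2) - R4 / (R3 + R4))

# No infrared radiation: the bridge is balanced and outputs nothing.
print(bridge_output(5.0, 1000.0, 1000.0, 1000.0, 1000.0))  # 0.0

# Incident radiation heats R2 and shifts its resistance -> nonzero output.
print(bridge_output(5.0, 1000.0, 1010.0, 1000.0, 1000.0))
```

A real readout adds amplification and digitization after this stage, but the balanced-versus-unbalanced behavior is the mechanism the text describes.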
The performance indicators of an infrared thermal imager include pixel count, spatial resolution, temperature resolution, minimum resolvable temperature difference, spectral response, frame rate, and detection, recognition, and identification distances. Its main function is to convert the infrared radiation emitted by the measured target into a two-dimensional grayscale or pseudo-color signal, thus showing the two-dimensional temperature distribution of the target. It can also detect at long range, with precise guidance, strong detection capability, and the ability to work around the clock in rain, fog, or completely lightless environments.
3. Thermal Imaging Camera Processing Technology
The image collected by an infrared thermal imager is dark, the contrast between target and background is low, the resolution is low, and edges are blurred. Because of limitations of the external environment and of the imager's own materials, temperature-measurement accuracy is low, and the influence of various noise sources means the collected image must be processed to improve accuracy. Infrared thermal imager processing technology corrects the nonuniformity of, denoises, and enhances the infrared image through relevant algorithms, so as to improve its temperature-measurement accuracy, contrast, resolution, and signal-to-noise ratio.
3.1. Infrared Image Processing Technology
3.1.1. Non-Uniformity Correction for Infrared Images
Under uniform blackbody radiation, the nonuniformity is the ratio, expressed as a percentage, of the standard deviation of the response values of all effective pixels of the infrared focal plane detector to the mean response value, as given in Equation (1) [16].
$$
\mathrm{NU} = \frac{1}{V_{\mathrm{avg}}}\sqrt{\frac{1}{M \times N - (d+h)}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(V_{ij}-V_{\mathrm{avg}}\right)^{2}},\qquad
V_{\mathrm{avg}} = \frac{1}{M \times N - (d+h)}\sum_{i=1}^{M}\sum_{j=1}^{N}V_{ij} \tag{1}
$$
Equation (1) is the definition of nonuniformity and has good applicability. In the equation, M and N are the numbers of rows and columns of the infrared focal plane detector array, respectively; V_ij is the response output voltage of the pixel in row i and column j; V_avg is the mean response output voltage over the valid detector elements; and d and h are the numbers of dead and overheated pixels arising in the array process, respectively. A pixel is generally considered dead when its response rate is less than 0.1 times the average pixel response rate, and overheated when its noise voltage is greater than ten times the average noise voltage. In general, NU is used as the index to evaluate and compare the nonuniformity of infrared focal planes [17].
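As a sketch, Equation (1) can be computed directly from a response image, excluding the d + h invalid pixels; the function name and the array values below are illustrative, not from the paper:

```python
import numpy as np

def nonuniformity(V, dead_mask):
    """Nonuniformity NU of a focal-plane response image V (Equation (1)).

    V: M x N array of pixel response voltages under uniform blackbody
    radiation; dead_mask: boolean M x N array marking dead/overheated
    pixels (the d + h invalid elements excluded from the statistics).
    """
    valid = V[~dead_mask]                 # M*N - (d + h) valid pixels
    v_avg = valid.mean()                  # mean response V_avg
    # Root-mean-square deviation of valid pixels, normalized by V_avg.
    return np.sqrt(((valid - v_avg) ** 2).mean()) / v_avg

# Illustrative example: ~2% response spread with one dead pixel excluded.
rng = np.random.default_rng(0)
V = 1.0 + 0.02 * rng.standard_normal((4, 4))
dead = np.zeros((4, 4), dtype=bool)
dead[0, 0] = True
print(f"NU = {nonuniformity(V, dead):.4f}")
```

A perfectly uniform response gives NU = 0; larger values indicate stronger fixed-pattern variation across the array.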
The nonuniformity of infrared images is related to the manufacturing materials and technology, the working state of the devices, the external input, the influence of the optical system, and so on. Nonuniformity correction in image processing can therefore achieve a more direct effect. The traditional correction methods in common use are calibration-based and scene-based algorithms. Calibration-based methods include the one-point, two-point [18], multi-point, and interpolation correction methods. Scene-based methods include the time-domain high-pass filtering method, the neural network method, the Kalman filtering method, and registration-based methods, as shown in Figure 4.
Figure 4. Study of nonuniformity correction algorithms for infrared images.
Sribner et al. [19] proposed a scene-based nonuniformity correction method, realized by an algorithm based on a temporal high-pass filter and an algorithm based on an artificial neural network. This method can effectively eliminate spatial noise and is more efficient than traditional algorithms. Qian et al. [20] proposed a new algorithm based on spatial low-pass and spatiotemporal high-pass filtering; by eliminating the high-spatial-frequency part of the nonuniformity and retaining the low-spatial-frequency part, the convergence speed is improved, but ghosting easily appears in the scene. Harris et al. [21] therefore developed a constant-statistics algorithm, which can eliminate most of the ghosting that plagues nonuniformity correction algorithms and improve the overall accuracy of image correction. Torres et al. [22] developed a scene-based adaptive nonuniformity correction method, which improves the correction effect mainly by estimating the detector parameters. Jiang et al. [23] proposed a new nonuniformity correction algorithm based on scene matching; by matching two adjacent images of the same scene, it achieves nonuniformity correction and adapts to the drift of nonuniformity with ambient temperature. Bai [24] proposed a nonuniformity correction method based on calibration data: working on the neural network principle, a correction model incorporating an integration-time term is constructed. The model is trained with blackbody gray images and the corresponding integration times as input and the gray mean value of the blackbody image as the expected value; the resulting correction network can effectively adapt to the nonuniformity caused by changes of integration time. Yang [25] proposed an improved stripe noise removal algorithm that combines the spatial and transform domains with a wavelet transform and a moving-window matching algorithm, improving the accuracy of nonuniformity correction. Huang et al. [26] proposed an algorithm for selecting the calibration points of the multipoint method; by taking the residual as the criterion for selecting calibration points, the points on the focal plane response curve can be determined adaptively, significantly improving the correction accuracy of the multipoint method. Wang et al. [27] proposed a nonuniformity correction method with variable integration time using pixel-level radiation self-correction technology: by establishing a radiation response equation for each pixel of the infrared detector, the radiation flux map of the scene is estimated and then corrected with a linear correction model, realizing nonuniformity correction under any integration time.
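A minimal sketch of the temporal high-pass idea behind the scene-based method of [19]: each pixel's slowly varying fixed-pattern component is tracked by a recursive per-pixel low-pass filter and subtracted. The class name, the time constant `m`, and all values below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

class TemporalHighpassNUC:
    """Sketch of a scene-based temporal high-pass nonuniformity correction.

    A running low-pass filter per pixel estimates the static offset
    pattern; subtracting it leaves the (temporally varying) scene detail.
    """

    def __init__(self, m=32):
        self.m = m            # effective time constant in frames (assumed)
        self.lowpass = None   # per-pixel fixed-pattern estimate

    def correct(self, frame):
        frame = frame.astype(float)
        if self.lowpass is None:
            self.lowpass = np.zeros_like(frame)
        # Recursive low-pass: converges toward the static per-pixel level.
        self.lowpass += (frame - self.lowpass) / self.m
        # High-pass output: frame with the slow offset removed.
        return frame - self.lowpass

# Feed a stream of frames; a static offset pattern is gradually removed.
nuc = TemporalHighpassNUC(m=8)
offset = np.array([[0.0, 5.0], [-3.0, 1.0]])   # fixed-pattern offsets
for _ in range(200):
    corrected = nuc.correct(10.0 + offset)     # flat scene + offsets
print(np.abs(corrected).max())                  # residual shrinks toward 0
```

Note that static scene content is attenuated along with the offsets, which is exactly the ghosting weakness of high-pass approaches that the text discusses.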
1. Nonuniformity correction of infrared image based on two-point calibration [28]
When the gain and DC bias components of the infrared focal plane detector are inconsistent across pixels, multiplicative and additive noise are generated. In two-point correction it is generally assumed that each detector cell is linear and its thermal response rate stable, that the infrared thermal imaging system operates in an environment where the ambient temperature changes little, and that the external incident infrared energy lies within the calibration temperature range. If the 1/f noise is very small or even negligible, then under these conditions the output of the pixel response of the focal plane detector is:

$$x_{ij}(\varphi) = u_{ij}\varphi + v_{ij} \tag{2}$$

In Equation (2), u_ij and v_ij are the gain coefficient and the DC bias coefficient of the pixel, respectively, both treated as stable thermal responses. Under this expression, as long as the input infrared radiation intensity remains unchanged, the response output of the detector pixel remains unchanged. Figure 5 is the schematic diagram of the two-point temperature correction, where b is the output of the standard pixel, a on the left is the output of the uncorrected pixel, a on the right is the output of the corrected pixel, and P_L and P_H are the output values of detector pixels under uniform radiation from the low-temperature T_L and high-temperature T_H blackbodies.
Figure 5. Schematic diagram of the two-point temperature calibration.
After correction, each pixel's original output value is scaled by its gain coefficient and shifted by its offset coefficient. The correction process is shown in Equation (3), and the corrected output expression in Equation (4).
$$P_{H} = G_{ij}\cdot x_{ij}(\varphi_{H}) + O_{ij},\qquad P_{L} = G_{ij}\cdot x_{ij}(\varphi_{L}) + O_{ij} \tag{3}$$

$$y_{ij}(\varphi) = G_{ij}\cdot x_{ij}(\varphi) + O_{ij} \tag{4}$$
G_ij and O_ij are the gain coefficient and the bias coefficient obtained after two-point correction, respectively; the expressions of the gain coefficient G_ij and the bias coefficient O_ij are given in Equation (5).
$$G_{ij} = \frac{P_{H}-P_{L}}{x_{ij}(\varphi_{H})-x_{ij}(\varphi_{L})},\qquad O_{ij} = \frac{P_{H}\,x_{ij}(\varphi_{L})-P_{L}\,x_{ij}(\varphi_{H})}{x_{ij}(\varphi_{L})-x_{ij}(\varphi_{H})} \tag{5}$$
The two-point correction method is completed by Equations (3) and (4). Because it corrects the nonuniformity of both gain and offset, most infrared systems adopt the two-point correction method. However, two-point calibration is only valid within the calibrated temperature range; outside this range, residual nonuniformity appears in the infrared image [29].
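To make Equations (3)–(5) concrete, the sketch below (illustrative variable names, not code from the paper) estimates a per-pixel gain and offset from two uniform blackbody frames and applies the linear correction:

```python
import numpy as np

def two_point_nuc(x_low, x_high, p_low, p_high):
    """Per-pixel gain G_ij and offset O_ij from two blackbody frames, Equation (5).

    x_low, x_high: raw responses x_ij(phi_L), x_ij(phi_H) to the two blackbodies
    p_low, p_high: desired uniform outputs P_L, P_H
    """
    gain = (p_high - p_low) / (x_high - x_low)
    offset = (p_high * x_low - p_low * x_high) / (x_low - x_high)
    return gain, offset

def correct(x, gain, offset):
    """Corrected output y_ij = G_ij * x_ij + O_ij, Equation (4)."""
    return gain * x + offset

# toy example: two pixels with different gains and offsets
x_low = np.array([10.0, 12.0])    # responses to the cold blackbody
x_high = np.array([50.0, 60.0])   # responses to the hot blackbody
G, O = two_point_nuc(x_low, x_high, p_low=20.0, p_high=100.0)
# after correction, both pixels map the calibration frames to the same targets
print(correct(x_low, G, O))   # [20. 20.]
print(correct(x_high, G, O))  # [100. 100.]
```

Applying the same gain and offset to an arbitrary scene frame then equalizes the pixel responses, within the limits of the linearity assumption discussed above.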
2. Nonuniformity correction of infrared image based on multi-point calibration [28]
In practical applications, especially at high and low temperatures, the response elements of infrared focal plane detectors are generally nonlinear, so the two-point correction method inevitably introduces errors. Multipoint calibration can therefore be used for correction: multiple temperature points are measured, and two-point calibration between each pair of adjacent temperature points yields a piecewise linear approximation. Multipoint temperature calibration thus reflects the real nonlinear response of the focal plane detector. The principle of multipoint temperature correction is shown in Figure 6.
According to the expression of pixel output of two-point calibration, the mathematical
expression of the corresponding output of each detection element under the radiation of
uniform blackbody with different intensities is shown in Equation (6).
y_ij(φ_1) = G_ij · x_ij(φ_1) + O_ij
y_ij(φ_2) = G_ij · x_ij(φ_2) + O_ij
......
y_ij(φ_k) = G_ij · x_ij(φ_k) + O_ij    (6)
Figure 6. Diagram of multi-point temperature calibration (image output versus temperature, with calibration points T1–T4).
The maximum value is taken as the reference, and the two-point calibration method is then applied segment by segment to derive the correction formula on each of the k − 1 calibration intervals for multi-point calibration correction, as shown in Equation (7).
y_n(φ) = [(y_n(φ_{m+1}) − y_n(φ_m)) / (y_ij(φ_{m+1}) − y_ij(φ_m))] · y_ij(φ) + (y_ij(φ_{m+1}) · y_n(φ_m) − y_ij(φ_m) · y_n(φ_{m+1})) / (y_ij(φ_{m+1}) − y_ij(φ_m))    (7)
In Equation (7), φ ∈ [φ_m, φ_{m+1}] and m ∈ [1, k − 1]. The correction coefficient equations are then as shown in Equation (8).
G_ij = (y_n(φ_{m+1}) − y_n(φ_m)) / (y_ij(φ_{m+1}) − y_ij(φ_m))

O_ij = (y_ij(φ_{m+1}) · y_n(φ_m) − y_ij(φ_m) · y_n(φ_{m+1})) / (y_ij(φ_{m+1}) − y_ij(φ_m))    (8)
Then,

Y_ij(φ) = G_ij(φ_m) · y_ij(φ) + O_ij(φ_m)    (9)
Equation (9) is the general formula for multipoint correction. In practice, multipoint correction performs far better than two-point correction: the more calibration points selected, the smaller the correction deviation and the stronger the temperature adaptability.
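The piecewise logic of Equations (7)–(9) can be sketched as follows for a single pixel; the calibration arrays and function names are illustrative assumptions, and a real system would store per-pixel coefficient tables:

```python
import numpy as np

def multipoint_correct(raw, cal_raw, cal_target):
    """Piecewise two-point correction for one pixel (Equations (7)-(9)).

    cal_raw:    the pixel's raw responses x_ij(phi_1..phi_k) at k blackbody temperatures
    cal_target: the desired reference outputs y_n(phi_1..phi_k) at the same temperatures
    """
    # locate the calibration segment [phi_m, phi_{m+1}] containing the raw value
    m = int(np.clip(np.searchsorted(cal_raw, raw) - 1, 0, len(cal_raw) - 2))
    # segment gain and offset, Equation (8)
    g = (cal_target[m + 1] - cal_target[m]) / (cal_raw[m + 1] - cal_raw[m])
    o = (cal_raw[m + 1] * cal_target[m] - cal_raw[m] * cal_target[m + 1]) / (
        cal_raw[m + 1] - cal_raw[m])
    return g * raw + o  # Equation (9)

# four calibration points of a mildly nonlinear pixel response
cal_raw = np.array([10.0, 30.0, 45.0, 55.0])
cal_target = np.array([0.0, 25.0, 50.0, 75.0])
print(multipoint_correct(30.0, cal_raw, cal_target))  # 25.0 (hits a calibration point)
print(multipoint_correct(50.0, cal_raw, cal_target))  # 62.5 (midway in the last segment)
```

Between consecutive calibration points this reduces to ordinary two-point correction, which is exactly the multi-segment linear approximation described above.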
3. Nonuniformity correction of infrared image based on BP neural network
Infrared image nonuniformity correction based on neural networks requires no calibration, and the BP neural network remains the most widely used and mature choice. It is a minimum-mapping network that learns by minimizing the mean square error; BP is essentially an error back-propagation algorithm. Its basic principle is that each input neuron is connected to a detection unit, whose information is fed into the hidden layer for calculation, and the calculated values are passed to the output layer. The error is obtained by comparing each neuron's expected value with its output value; any error beyond the set range is propagated backwards, that is, the weights are modified. Through this reverse learning, the weight coefficients are updated until the error falls below the set threshold.
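As a toy illustration of the back-propagation loop just described (a generic one-hidden-layer regression, not the correction network from the cited work), the output error is propagated backwards and the weights are updated by gradient descent on the mean square error:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression: learn y = x1 + x2 from two "detection unit" inputs
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X.sum(axis=1, keepdims=True)

# one hidden layer (tanh) and a linear output layer
W1 = rng.normal(0.0, 0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros((1, 1))

lr = 0.1
for _ in range(2000):
    # forward pass: input -> hidden -> output
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                            # compare output with expected value
    # backward pass: propagate the error and modify the weights
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((err ** 2).mean())
print(mse)  # decreases toward zero as training proceeds
```

In a nonuniformity-correction setting, the same loop would adapt per-pixel coefficients until the output error drops below the set threshold.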
3.1.2. Infrared Image Denoising
Due to the influence of detector materials, processing methods, and the external environment, infrared images suffer from serious noise, which degrades image quality. Infrared images therefore need to be denoised to improve their visual quality. At present, traditional research on infrared image denoising focuses mainly on the spatial domain and the transform domain; the specific algorithms are summarized in Figure 7.
Figure 7. Research on infrared image denoising algorithms. The figure organizes the field into conventional infrared image denoising — spatial-domain methods (median, mean, Gaussian, bilateral, and guided filtering); transform-domain methods (Fourier transform, discrete cosine transform, low-pass filtering, and wavelet transform at fixed or varying scales); algorithms combining the spatial and frequency domains; partial-differential-equation-based algorithms; non-local-means-based algorithms; and 3D-block-matching-based algorithms — and deep-learning-based denoising, covering multilayer perceptron networks and convolutional-neural-network methods such as DnCNN and FFDNet.
Donoho et al. [30] proposed a curve estimation method based on N noise data, which minimizes the error of the loss function by shifting the empirical wavelet coefficients toward the origin by a fixed amount. Mihcak et al. [31] proposed a spatially adaptive statistical model of wavelet image coefficients for infrared image denoising; the denoising effect is achieved by applying an approximate minimum mean square error estimation procedure to recover the noisy wavelet coefficients. Zhang et al. [32] proposed an improved mean filtering algorithm based on adaptive center weighting: the mean filtering result is used to estimate the variance of the Gaussian component of mixed noise, and the estimate is then used to adjust the filter coefficients. The algorithm is robust, but its protection of infrared image edge information is limited and it easily blurs edges. Therefore, Zhang et al. [33] proposed an infrared image denoising method based on the orthogonal wavelet transform, which effectively retains the detailed information of the infrared image while denoising and improves denoising accuracy. Buades et al. [34] proposed the classical non-local spatial-domain denoising method: by exploiting the spatial geometric features of the image, representative long-edge features are found and protected during denoising, so the edge texture of the denoised image remains clear. However, the method must traverse the image many times, resulting in a large amount of computation. Dabov et al. [35] proposed the classical block-matching and 3D filtering (BM3D) denoising method combining the spatial and transform domains, realized through three consecutive steps: 3D grouping transformation, transform-spectrum shrinkage, and inverse 3D transformation. The algorithm achieved state-of-the-art denoising performance in terms of peak signal-to-noise ratio and subjective visual quality, but it is complex and difficult to implement in practice. Chen et al. [36] proposed a wavelet infrared image denoising algorithm based on information redundancy. Wavelet coefficients with similar redundant information are obtained by different downsampling schemes in the discrete wavelet transform; the coefficients are nonlinearly transformed according to a noise estimate to suppress high-frequency noise while retaining details; the transformed coefficients are used to reconstruct multiple images, which are then weighted to further remove high-frequency noise and obtain the final denoised image. The algorithm is robust. Gao [37] proposed an infrared image denoising method based on guided filtering and three-dimensional block matching; using a quadratic joint filtering strategy, the excellent denoising performance of BM3D is maintained while the signal-to-noise ratio and contrast of the image are improved. Divakar et al. [38] proposed a new convolutional neural network architecture for blind image denoising, using a multi-scale feature extraction layer to reduce the influence of noise; the feature maps are trained with a three-step procedure, and adversarial training is used to improve the final performance of the model. The proposed model shows competitive denoising performance. Zhang et al. [39] proposed a new image denoising method based on a deep convolutional neural network: the latent clean image is recovered by separating the noise from the contaminated image, and a gradient clipping scheme is adopted during training to prevent gradient explosion and make the network converge quickly. The algorithm has good denoising performance. Yang et al. [40] improved the propagation filter algorithm by adding an oblique-path judgment step, which makes the detected infrared edges complete and improves denoising accuracy. Xu et al. [41] proposed an improved compressed-sensing infrared image denoising algorithm: the infrared image is first coarsely denoised with a median filter, and then the sparse transform and observation matrix of compressed sensing are used for fine denoising so that the observations retain the important information of the original signal; the denoised image is finally obtained through a reconstruction algorithm. The visual effect of the resulting image is close to the original, and the algorithm performs well in real scenes.
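As a concrete instance of the spatial-domain branch surveyed above, a minimal 3×3 median filter (a NumPy sketch; production code would use an optimized library routine) removes impulse noise while leaving flat regions untouched:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication (spatial-domain denoising)."""
    padded = np.pad(img, 1, mode="edge")
    # collect the 9 shifted views of every pixel's 3x3 neighborhood
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

# a flat patch with one hot (impulse-noise) pixel
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
out = median3x3(img)
print(out[2, 2])  # 10.0: the impulse is removed
```

Because the median discards outliers instead of averaging them in, it suppresses salt-and-pepper noise better than mean filtering, at the cost of some smoothing of fine texture.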
1. Infrared image denoising based on deep learning [41]
In recent years, infrared image denoising based on deep learning has become a promising denoising approach and has gradually become mainstream. Deep-learning-based infrared image denoising is mainly divided into multilayer perceptron network models and convolutional-neural-network models; the latter includes fixed-scale and transform-scale variants. Mao et al. [42] proposed an encoder-decoder network for image denoising in which end-to-end mapping between images is realized through multilayer convolution and deconvolution operations; the convolution and deconvolution layers are symmetrically connected by skip connections to solve the vanishing-gradient problem. In 2017, DnCNN, one of the best deep-learning-based denoising algorithms, was proposed. DnCNN borrows the residual learning method from ResNet. Unlike ResNet, DnCNN does not add a skip connection and activation every two convolution layers, but instead makes the network output the residual image between the clean image and the reconstructed image. According to the theory behind ResNet, when the residual is 0 the stacked layers are equivalent to an identity mapping, which is very easy to train and optimize; the residual image is therefore well suited as the network output for image reconstruction. Batch normalization is also used in DnCNN: adding batch normalization before the activation function reduces internal covariate shift, which brings faster training and better performance and makes the network less sensitive to the initialization of variables. The year after DnCNN was published, Zhang et al. [43] proposed FFDNet, which provides a fast denoising solution. Beyond natural-image denoising, deep-learning-based denoising has also been applied to other image types. Liu et al. [44] combined a convolutional neural network and an autoencoder to propose DeCS-Net for hyperspectral image denoising, which is robust in its denoising effect. Zhang et al. [45] proposed the MCN network for speckle noise removal from synthetic aperture radar images by combining the wavelet transform with multi-level convolutional connections. The network is designed for interpretability: a nonlinear filter operator, a reliability matrix, and a high-dimensional feature transformation function are introduced into the traditional consistency prior. A new adaptive consistency prior (ACP) is proposed, introducing