Condition Monitoring for Confined Industrial Process Based on
Infrared Images by Using Deep Neural Network and Variants
Chalmers University of Technology
Some industrial processes take place in confined settings only observable by sensors, e.g. infrared (IR) cameras. Drying processes take place while a material is transported by means of a conveyor through a “black box” equipped with internal IR cameras. While such sensors deliver data at high rates, this is beyond what human operators can analyze and calls for automation. Inspired by numerous implementations of monitoring techniques that analyse IR images using deep learning, this paper shows how they can be applied to the confined microwave drying of porous foams, benchmarking their effectiveness at condition monitoring for fault detection. Convolutional neural networks, derived transfer learning, and deep residual neural network methods are regarded as cutting-edge and are studied here, using a set of conventional approaches for comparative evaluation. Our comparison shows that state-of-the-art deep learning techniques significantly benefit condition monitoring, providing an increase in fault-finding accuracy of up to 48% over conventional methods. Nevertheless, we also found that in our case derived transfer learning and deep residual network techniques do not yield increased performance over normal convolutional neural networks.
CCS CONCEPTS: • Computing methodologies → Activity recognition
KEYWORDS: Condition monitoring; neural networks; fault detection; deep learning; confined industrial process
1 INTRODUCTION
Condition monitoring for a confined industrial process is crucial to detect undesired and defective products and to improve efficiency by reducing faults in production equipment. The detection and diagnosis of defects and faults via improved monitoring is an active field of research in industrial systems due to its potential for reducing
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from email@example.com.
IVSP 2020, March 20–22, 2020, Singapore, Singapore
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-7695-2/20/03. . . $15.00
maintenance costs [ ]. Condition monitoring enables the comparison between regular and erroneous scenarios by use of external or built-in devices. The progressively increasing sophistication of industrial equipment brings a greater likelihood of faults due to their complexity. Therefore, appropriate and effective condition monitoring must become mandatory in the context of data-driven industry.
Modern industrial facilities are precisely operated and profusely instrumented, creating large quantities of process data in various forms. Sensors deliver monitoring data at rates beyond what human operators can analyze, leading to a growing need for automation. Typically, experts derive features to categorize conditions, including faults, but cannot handle the sheer amount of data that must be sorted. For example, confined industrial processes impair an operator’s ability to inspect visual conditions, and instead produce a copious number of images and data from sensors, supporting the need for automatic monitoring [ ]. Techniques for monitoring and detecting must thus be flexible and efficient enough to analyse a large amount of a great variety of data in order to deliver accurate results.
Figure 1. Schematic representation of feature engineering
Figure 2. Schematic representation of feature engineering
with multiple feature transformations
Conventional “intelligent” methods of condition monitoring are taxonomically defined as feature engineering (Figure 1) [ ] – also called shallow learning models [ ] – which include Artificial Neural Network [ ], Support Vector Machine [ ], and Logistic Regression [ ] techniques. These well-established and verified methods have already achieved considerable success in intelligent fault detection [ ], but are not capable of tackling complicated faults with low amounts of prior knowledge [ ]. These traditional models also have difficulties when processing complex errors, causing undesirable consequences, which motivates us to explore more functional
Figure 3. Schematic overview of confined microwave drying of porous foams. An array of infrared (IR) cameras is mounted on the ceiling inside of the confined chamber
and powerful methods for feature extraction and function computation, even in multiplex settings.
To achieve this, we propose to use feature learning (Figure 2) [ ] instead of feature engineering. Feature learning makes use of algorithms that create and learn features derived from raw data, with iterative steps to continue learning from newly obtained data [ ]. Deep learning [ ] is typically representative of feature learning: it is a class of machine learning algorithms that use multiple layers of nonlinear processing units for feature extraction and transformation [ ]. With deep learning, every hierarchical layer can learn from the output of the previous layer and convey the newly-generated output to the next layer. This allows it to extract complicated features and resolve multiple complex functions, overcoming some of the shortcomings of existing methods. There are many categories of representations in deep learning, such as Deep Neural Network (DNN) [ ], Convolutional Neural Network (CNN) [ ], Recurrent Neural Network (RNN) [ ], and others. Research has shown that CNN is favorable in the realm of computer vision [ ] while RNN is superior in Natural Language Processing (NLP) [ ].
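The hierarchical, layer-by-layer transformation described above can be sketched in a few lines of plain NumPy. This is a hedged illustration only: the weight shapes, layer count, and random inputs are arbitrary choices of ours, not those of the networks used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, W, b):
    """One nonlinear processing unit: affine map followed by ReLU."""
    return np.maximum(x @ W + b, 0.0)

# Toy input: a batch of 4 flattened "images" with 16 raw features each.
x0 = rng.standard_normal((4, 16))

# Each layer learns from the output of the previous one and conveys
# its newly generated output to the next layer.
W1, b1 = rng.standard_normal((16, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 4)), np.zeros(4)

x1 = dense_relu(x0, W1, b1)   # shallow layer: rough features
x2 = dense_relu(x1, W2, b2)   # deeper layer: more abstract features

print(x1.shape, x2.shape)     # (4, 8) (4, 4)
```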
Nowadays, IR imaging is widely used in industrial processes because of its excellent performance in measuring temperature differences, important when monitoring safety-critical equipment [ ]. In a confined industrial process where temperature is a crucial parameter for evaluating overall performance, IR cameras and IR imaging are especially useful for monitoring process health. In this paper, we concentrate on an industrial process called microwave drying for porous foams, which is operated in a confined chamber independent from the rest of the process. As illustrated in Figure 3, a “black-box” is used to cover the foam drying process. An array of IR cameras is installed on the ceiling of the chamber to capture the complete process. Other sensors (e.g. Microwave Tomography Sensor, Electrical Capacitance Sensor) are employed either before or after the drying process to measure the parameters of the foam, but only the IR cameras can observe the process taking place inside the box. The main research question here is whether, through analysing the IR images, a state-of-the-art model-driven deep learning technique can in this context more effectively determine process conditions when compared to conventional methods. The contributions of our work are:
• Deriving a method to access and monitor the confined drying process.
• Deploying state-of-the-art deep learning with its derived methods and verifying their efficacy.
• Making use of IR images to broaden industrial condition monitoring.
• Comparing feature learning and conventional feature engineering.
The layout of this paper is as follows. Section 1 introduces the proposal and describes conventional feature engineering and advanced feature learning. Section 2 presents related work on deep learning for condition monitoring and deep learning specifications. An overview of the approaches containing the baseline model we use is elaborated in Section 3. The results from using IR images based on deep learning are presented in Section 4, followed by discussion and our conclusions in Section 5.
2 RELATED WORK
Deep learning is one of the derivatives of machine learning algorithms which use multiple layers to progressively extract higher-level features from raw input [ ]. For instance, when conducting object detection using images, deep learning assigns shallow layers to extract rough features like edges while deeper layers are capable of analyzing more precise features like shapes. DNN and CNN are prominent techniques within this domain that have proven successful over the decades. With emerging technologies being rapidly adopted, numerous variations of the original algorithms are being published. Transfer learning is a typical variant of deep learning that focuses on grafting knowledge obtained from one task onto a different but related problem [ ]. For example, the knowledge gained from learning to classify dogs can also be applied to classifying cats. A conventional conceptualization of transfer learning is shown in Figure 4. The deep residual neural network is another mutation derived from deep learning. It is able to carry out processing in the same way as a normal DNN or CNN by skipping some mutual connections to jump over certain layers, but is easier to optimize compared to a general DNN, and can gain accuracy at considerably increased depth [ ]. The diagram of a basic deep residual neural network is conceptualized in Figure 5.
Figure 4. The concept of transfer learning
Figure 5. The concept of a deep residual neural network
Deep learning for condition monitoring or fault diagnosis is ubiquitously employed in industry. Fenton et al. [ ] established a research overview on fault detection in electronic systems wherein the importance of DNN was emphasized. In addition, a retrospective review of deep learning in machine health monitoring was conducted by Zhao et al. [ ]. They concluded that general deep learning methods in this context mainly stem from Autoencoders and their variants, Restricted Boltzmann Machines and their variants including Deep Belief Network (DBN) and Deep Boltzmann Machines (DBM), CNN, and RNN. Janssens et al. [ ] conducted a comprehensive evaluation of feature engineering and feature learning among several industrial cases, verifying the latter's superiority for automatically determining the condition of machine health using IR videos. Keerthi et al. [ ] also fed IR videos as input into a CNN to automatically extract features of the relevant region of interest, and subsequently make a prediction regarding their machine's bearing oil status. Their evaluation showed that the proposed system achieved an accuracy of 96.67%. Ma et al. [ ] implemented a deep Autoencoder for diagnosing faults based on images and structured data. A novel model which extracted features entirely by DNN and conducted analysis via a hidden Markov model (HMM) was proposed by Qiu et al. [ ] to handle indistinguishable faults. For an industrial rolling element bearing (REB) fault classification process, Amar et al. [ ] gave an example of creating and then training on vibration spectrum images in a DNN. Likewise, for a similar REB fault detection task, Verma et al. [ ] developed an autoencoder using intelligent unsupervised learning on vibration measurements. Chen et al. [ ] benchmarked CNN for condition monitoring and revealed that their approach had extraordinary performance in comparison to peer algorithms for fault identification and classification. To overcome the shortcomings of traditional autoencoders in intelligent fault diagnosis of machines, Jia et al. [ ] developed a local connection network (LCN) based on a reliable dataset constructed by a normalized sparse autoencoder (NSAE), namely NSAE-LCN, and verified its superiority throughout experimentation in contrast to existing methods.
3 DEEP LEARNING FOR CONDITION MONITORING
We develop a method to automatically monitor process conditions within a confined microwave oven used to dry porous foams, using images from IR cameras placed inside the oven. To construct a dataset large and convincing enough to validate the method, every condition must be defined accurately. Our methodology firstly addresses the issue of determining the right set of conditions, secondly establishes a set of benchmark networks matched with transfer learning, and thirdly selects a set of baseline models.
3.1 Determining Set of Conditions
Our data comes from three distinct drying processes of different foams with different drying durations. Three examples of IR images derived from each drying process are shown in Figure 6, where different colors represent different temperature values. A crucial metric – the moisture level of the foam – is commonly used in the evaluation of a drying process and is introduced in our context of condition determination. It is worth noting that the temperature value and moisture level of the foam are inversely proportional, with higher temperature corresponding to lower moisture level and vice versa.
In our experimental setting, IR videos recorded the entire drying process, from the foam being sent into the chamber to its reemergence. We obtained nearly 10000 IR images to store in our dataset. For further support of our observations, we divided the whole dataset into a training set and a testing set, at a proportion of 8:2.
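As a hedged sketch (the random seed and index bookkeeping are our own choices, not details from the experiment), an 8:2 split of roughly 10000 images can be produced like this:

```python
import numpy as np

rng = np.random.default_rng(42)

n_images = 10000                     # approximate size of our IR dataset
indices = rng.permutation(n_images)  # shuffle before splitting

split = int(0.8 * n_images)          # proportion of 8:2
train_idx, test_idx = indices[:split], indices[split:]

print(len(train_idx), len(test_idx))  # 8000 2000
```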
Figure 6. Three example IR images from our dataset, taken from processes 1, 2, and 3 (left to right). Color represents temperature, thereby also showing moisture level. We intend to define faults based on the number of different moisture levels in a single image
The overall eight conditions defined are presented in Table 1. These conditions incorporate the complete information received from the three processes, although not every single process contains all of the conditions. Thus, the predefined conditions became labels, either for ground-truth indexing in the training phase or for outputting in the testing phase. Among them, conditions 7 and 8 are recognized as faults if detected.
3.2 Benchmark Networks
As mentioned earlier, CNN has been demonstrated to be capable of handling images. Its properties, such as local connectivity, weight sharing, and pooling, contribute to a quick training phase with few training parameters [ ]. A typical CNN consists of an input and an output layer with a number of hidden layers in between. These layers can be convolutional, pooling, or fully connected:
• The convolutional layer: Convolutional layers implement convolution operations between the input – mostly a matrix shuffled from the input image – and the convolutional filter, a square matrix. In many state-of-the-art CNN architectures, an activation function called Rectified Linear Unit (ReLU), proposed by Krizhevsky et al. [ ], is extensively used, mitigating suffering from gradient vanishing.
• The pooling layer: Max-Pool is the most widespread operation, which adopts only the maximum value from every small region of the previous output.
• The fully-connected layer: Fully-connected layers are equal to normal neural networks, where there will be no convolution or pooling operations.
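The Max-Pool operation from the pooling-layer description can be illustrated with a minimal NumPy sketch; the 2×2 window size is an assumption for illustration, not necessarily that of our model.

```python
import numpy as np

def max_pool_2x2(a):
    """Keep only the maximum value from every 2x2 region of the input."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [1, 1, 4, 4]])

print(max_pool_2x2(a))
# [[4 8]
#  [9 4]]
```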
In Figure 7, the architecture of the CNN used in our work is displayed. There are nine layers in our model, performing feature extraction, parameter training, and outputting. In the hidden convolutional layers, the ReLU function is chosen as the activation function for feature transformation, while the Softmax [ ] function fulfills the multi-class classification task.
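The Softmax used for the multi-class output can be written as a short, numerically stable NumPy sketch. The 8-way logit vector mirrors our eight defined conditions, but the values themselves are made up for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Made-up logits for one image over the eight defined conditions.
logits = np.array([0.1, 2.0, 0.3, 0.0, -1.0, 0.5, 0.2, 3.0])
probs = softmax(logits)

print(round(probs.sum(), 6))       # 1.0 (a distribution over conditions)
print(int(probs.argmax()) + 1)     # predicted condition number: 8
```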
Secondly, we also utilize transfer learning, on the assumption that a model trained with a designated type of image should be able to learn from a different type of image. Thus, we hypothesize that transfer learning is appropriate for handling the IR images in our context. Two well-proven transfer learning networks – VGG-16 and VGG-19 – are used in the trials to verify our assumption. The VGG-16 net has 16 weighted layers and VGG-19 holds 19 layers. The specific application of transfer learning is as follows: after the removal of the last layer of the whole model, we add three dense layers we have created ourselves. The resulting new model consists of the pretrained model and our external output layer, such as the multi-class classifier with Softmax function. The principle behind this method is to use the features and weights learned by the pretrained models to export the desired results through an increment of self-defined layers.
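The principle – a frozen pretrained feature extractor plus a few newly added dense layers ending in Softmax – can be sketched in NumPy. This is a conceptual stand-in, not the actual VGG weights: the random matrix plays the role of the frozen pretrained stack, and the layer widths are invented; only the 8 output classes come from our defined conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen pretrained model (its weights are NOT trained).
W_frozen = rng.standard_normal((128, 64))

def pretrained_features(x):
    return np.maximum(x @ W_frozen, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Three self-defined dense layers appended after the removed last layer;
# only these would be updated during training.
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((32, 16))
W3 = rng.standard_normal((16, 8))   # 8 outputs: one per condition

def new_model(x):
    h = pretrained_features(x)       # reused features and weights
    h = np.maximum(h @ W1, 0.0)
    h = np.maximum(h @ W2, 0.0)
    return softmax(h @ W3)           # multi-class classifier

x = rng.standard_normal((4, 128))    # batch of 4 flattened inputs
y = new_model(x)
print(y.shape)                       # (4, 8)
```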
Concurrently, the deep residual network is used to investigate its utility. We choose ResNet-50 [ ] as a representative of residual nets to execute the entire implementation. ResNet-50 is a 50-layer residual network with several built-in shortcuts, which allow the formal training and testing procedures to skip over certain layers.
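The shortcut idea can be illustrated with a single residual block in NumPy (a hedged toy, not ResNet-50 itself): the block computes relu(F(x) + x), so when the learned transform F contributes nothing, the skip connection passes the input straight through.

```python
import numpy as np

def residual_block(x, W):
    """relu(F(x) + x), with a single linear transform F(x) = x @ W."""
    return np.maximum(x @ W + x, 0.0)

x = np.array([[1.0, 2.0, 3.0]])

# With zero weights the shortcut dominates: the input passes through,
# which is what makes very deep residual nets easier to optimize.
W_zero = np.zeros((3, 3))
print(residual_block(x, W_zero))   # [[1. 2. 3.]]
```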
3.3 Baseline Models
To triangulate our research results, another four conventional feature engineering methods were also assigned as baseline models, dealing with the identical IR dataset as well as our benchmark network processes. A concise introduction to these methodologies follows:
• Logistic regression (LR): LR [ ] is a wide-ranging machine learning algorithm mainly used in binary classification and recognition. Statistically, its basic formation uses a logistic function to model a binary classification problem.
• Naïve Bayes (NB): The NB classifier, belonging to the probabilistic classifiers, has received much attention since its introduction [ ]. It applies Bayesian theory to practical problems with some pre-proposed hypotheses. Common extensions would be Gaussian naive Bayes and Bernoulli naive Bayes.
• Support Vector Machine (SVM): SVM [ ] is a supervised learning algorithm which is also mainly used in binary classification. SVM builds up a hyperplane as a boundary for distinguishing samples of one type from another, making itself a kind of non-probabilistic binary linear classifier.
• Linear Discriminant Analysis (LDA): LDA, as its name suggests, conducts a linear analysis to characterize samples into two or more categories, and is widely used in classification and pattern recognition [ ]. In some aspects, LDA is eligible for performing data dimensionality reduction.
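As an illustration of the logistic function underlying LR, the following NumPy sketch makes a binary decision from a modeled probability; the weights and inputs here are invented for the example, not fitted to our IR data.

```python
import numpy as np

def sigmoid(z):
    """The logistic function used by LR to model class probability."""
    return 1.0 / (1.0 + np.exp(-z))

def lr_predict(x, w, b):
    """Binary decision: class 1 when the modeled probability >= 0.5."""
    return (sigmoid(x @ w + b) >= 0.5).astype(int)

# Toy 2-feature inputs with hand-picked weights.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([[3.0, 1.0],    # w.x =  2 -> probability > 0.5 -> class 1
              [1.0, 3.0]])   # w.x = -2 -> probability < 0.5 -> class 0

print(lr_predict(x, w, b))   # [1 0]
```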
4 RESULTS
Results of the implementation of training and testing sessions for the three confined microwave drying processes using both benchmark networks and baseline models are shown in Figure 8. Two predicted examples from the testing phase are exhibited, as our algorithms successfully classify them into condition 2 and condition 8
Table 1. Summary of the 8 conditions defined in our dataset. N.A. implies that the corresponding condition does not exist in any process
Foam moisture level(s) | Low | Low + Medium | Low + Medium + High
Foam entering chamber | Condition 1 | Condition 2 | Condition 3
Foam inside chamber | N.A. | Condition 4 | Condition 5
Foam leaving chamber | Condition 6 | Condition 7 | Condition 8
Figure 7. The schematics of the CNN model used in our experiment
(fault). Accuracy is chosen as the metric for evaluating condition monitoring. This widely-used criterion specifies the ratio between the samples correctly classified and the total samples in the dataset, as illustrated in Equation (1). Results for the feature learning-based benchmark networks and the feature engineering-based baseline models are displayed in Tables 2 and 3 as well as in the statistical charts in Figures 9 and 10, where the training accuracy (exploration) and the testing accuracy (validation) are recorded respectively.
Accuracy (%) = (number of samples correctly classified) / (number of total samples in dataset) × 100    (1)
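Equation (1) reduces to a one-liner; a minimal NumPy sketch with made-up condition labels:

```python
import numpy as np

def accuracy(predicted, truth):
    """Percentage of samples correctly classified, per Equation (1)."""
    predicted, truth = np.asarray(predicted), np.asarray(truth)
    return 100.0 * np.mean(predicted == truth)

# Made-up condition labels for eight test images (7 of 8 correct).
truth     = [1, 2, 2, 8, 4, 5, 7, 8]
predicted = [1, 2, 3, 8, 4, 5, 7, 8]

print(accuracy(predicted, truth))  # 87.5
```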
From the statistics, we can see how the four feature learning methods completely outstrip the four conventional feature engineering methods, based on their superior accuracy acquired both in the training and testing phases. Deep learning is thus observed to have satisfactory efficiency in analysing the IR images from the monitoring of this process, supporting our research question proposed in Section 1.
In the training results, it is noteworthy that all four benchmark networks achieve higher accuracy (nearly 100%) when compared to the other four baseline models. Among them, CNN, VGG-16, VGG-19 and ResNet-50 provided 100% accuracy when training on process 2. LR, NB, SVM and LDA are only slightly inferior to the benchmark networks in the training performance for processes 1 and 2; however, they show a large difference in accuracy (27% to 38%) when compared to the deep learning methods on process 3.
Likewise, the feature learning-based benchmark networks outperform the baseline models comprehensively in the testing stage. Deep learning approaches are capable of high testing accuracy; for example, CNN provides 99.71% accuracy when validating process 3. There, the disparity between baselines and benchmarks is even more exaggerated: by comparison, SVM provides only 52.89% testing accuracy.
Generally, the performance of transfer learning and deep residual neural networks is more effective than that of normal DNN and CNN in the context of computer vision tasks [ ]. However, our research shows that transfer learning and deep residual networks do not have greater abilities in condition monitoring and prognostic fault detection within a confined microwave drying process. In fact, neither the VGG networks nor the deep residual network can be shown to exceed the training and testing accuracy of our general CNN model in the three distinct processes.
5 DISCUSSION AND CONCLUSION
With the work presented, we show that state-of-the-art feature learning-based tools, CNN and its variants, are advantageous for condition monitoring in a confined microwave drying process using IR images. The merit of feature learning is that it conducts iterative feature transformation, which conventional feature engineering does not, making the implementation more reliable. In addition, we also verify that transfer learning and deep residual neural networks do not outperform the standard CNN model. Overall, feature learning-based methods are well-qualified to provide robust monitoring in various conditions and to detect faults in a non-visible microwave drying process, as shown in our three distinct demonstrations. Deep learning transcends traditional condition monitoring methods by a range varying from approximately 3% to 48%, according to our research.
However, there is still room to advance our approaches. First, the volume of the database should be enlarged to more closely approximate the massive database that would be required for truly analogous research. In addition, the thickness, as well as other physical properties, of the foam chosen in the experiment may affect the temperature distribution, which may influence the desired results. To make the research more robust, a greater variety of foams should now be tested.
Figure 8. Examples of two IR images tested by our proposed methods. The left is predicted as condition 2 (foam has low and medium moisture levels). The right is classified as condition 8 (foam has low, medium, and high moisture levels), which is the fault condition detected
Table 2. Overview of the training accuracy (%) conducted by four benchmark networks and four baseline models for the
three confined microwave drying processes via IR images
Accuracy (%) | CNN | VGG-16 | VGG-19 | ResNet-50 | LR | NB | SVM | LDA
Process 1 | 98.98 | 99.59 | 99.25 | 98.78 | 94.89 | 91.91 | 95.17 | 91.63
Process 2 | 100.00 | 100.00 | 100.00 | 100.00 | 93.44 | 89.91 | 93.13 | 97.32
Process 3 | 99.80 | 99.72 | 99.76 | 99.88 | 72.00 | 63.26 | 72.59 | 61.43
Table 3. Overview of the testing accuracy (%) conducted by four benchmark networks and four baseline models for the three
confined microwave drying processes via IR images
Accuracy (%) | CNN | VGG-16 | VGG-19 | ResNet-50 | LR | NB | SVM | LDA
Process 1 | 96.93 | 97.28 | 97.03 | 93.89 | 90.00 | 88.75 | 90.00 | 90.31
Process 2 | 98.48 | 98.04 | 98.52 | 98.36 | 90.25 | 88.71 | 85.13 | 88.97
Process 3 | 99.71 | 99.22 | 99.57 | 99.65 | 56.00 | 57.78 | 52.89 | 60.44
Figure 9. Training accuracy data (%) for our proposed four benchmark methods and four baseline models. The dark blue bar,
light blue bar and orange bar represent processes 1, 2, and 3 respectively
Figure 10. Testing accuracy data (%) for our proposed four benchmark methods and four baseline models. The dark blue bar,
light blue bar and orange bar represent processes 1, 2, and 3 respectively
In future work, we will focus on improving the current work to enrich the dataset by acquiring a larger number of different IR images and engaging a variety of other industrial processes. Additionally, defining more specific process conditions will be another core task, where the data may be analyzed in a simultaneously spatial and temporal manner.
ACKNOWLEDGMENTS
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 764902.
REFERENCES
Muhammad Amar, Iqbal Gondal, and Campbell Wilson. 2014. Vibration spectrum imaging: A novel bearing fault classification approach. IEEE Transactions on Industrial Electronics 62, 1 (2014), 494–502.
Suresh Balakrishnama and Aravind Ganapathiraju. 1998. Linear discriminant
analysis-a brief tutorial. Institute for Signal and information Processing 18 (1998),
Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation
learning: A review and new perspectives. IEEE transactions on pattern analysis
and machine intelligence 35, 8 (2013), 1798–1828.
Ana M Bianco and Elena Martínez. 2009. Robust testing in the logistic regression
model. Computational Statistics & Data Analysis 53, 12 (2009), 4095–4105.
Jingnian Chen, Houkuan Huang, Shengfeng Tian, and Youli Qu. 2009. Feature selection for text classification with Naïve Bayes. Expert Systems with Applications 36, 3 (2009), 5432–5435.
ZhiQiang Chen, Chuan Li, and René-Vinicio Sanchez. 2015. Gearbox fault identification and classification with convolutional neural networks. Shock and Vibration
Youngjun Cho, Nadia Bianchi-Berthouze, Nicolai Marquardt, and Simon J Julier.
2018. Deep Thermal Imaging: Proximate Material Type Recognition in the Wild
through Deep Learning of Spatial Surface Temperature Patterns. In Proceedings
of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning. ACM, 160–167.
Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine
learning 20, 3 (1995), 273–297.
Li Deng, Dong Yu, et al. 2014. Deep learning: methods and applications. Foundations and Trends® in Signal Processing 7, 3–4 (2014), 197–387.
William G Fenton, T. Martin McGinnity, and Liam P Maguire. 2001. Fault diagnosis of electronic systems using intelligent techniques: A review. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 31, 3 (2001),
Steven Gold, Anand Rangarajan, et al. 1996. Softmax to softassign: Neural network algorithms for combinatorial optimization. Journal of Artificial Neural Networks 2, 4 (1996), 381–399.
Kasthurirangan Gopalakrishnan, Siddhartha K Khaitan, Alok Choudhary, and
Ankit Agrawal. 2017. Deep Convolutional Neural Networks with transfer learning
for computer vision-based data-driven pavement distress detection. Construction
and Building Materials 157 (2017), 322–330.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual
learning for image recognition. In Proceedings of the IEEE conference on computer
vision and pattern recognition. 770–778.
Olivier Janssens, Rik Van de Walle, Mia Loccufier, and Sofie Van Hoecke. 2017. Deep learning for infrared thermal image based machine health monitoring. IEEE/ASME Transactions on Mechatronics 23, 1 (2017), 151–159.
Feng Jia, Yaguo Lei, Liang Guo, Jing Lin, and Saibo Xing. 2018. A neural network
constructed by deep learning technique and its application to intelligent fault
diagnosis of machines. Neurocomputing 272 (2018), 619–628.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097–1105.
Shih-Chung B Lo, Heang-Ping Chan, Jyh-Shyan Lin, Huai Li, Matthew T Freedman, and Seong K Mun. 1995. Artificial convolution neural network for medical image pattern recognition. Neural networks 8, 7-8 (1995), 1201–1214.
M. Keerthi and R. Rajavignesh. 2018. Machine Health Monitoring Using Infrared Thermal Image by Convolution Neural Network. International Journal of Engineering Research & Technology (IJERT) 6, 07 (2018), 1–5.
Yan Ma, Zhihong Guo, Jianjun Su, Yufeng Chen, Xiuming Du, Yi Yang, Chengqi
Li, Ying Lin, and Yujie Geng. 2014. Deep learning for fault diagnosis based on
multi-sourced heterogeneous data. In 2014 International Conference on Power
System Technology. IEEE, 740–745.
Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association.
Jingwei Qiu, Wei Liang, Laibin Zhang, Xuchao Yu, and Meng Zhang. 2015. The
early-warning model of equipment chain in gas pipeline based on DNN-HMM.
Journal of Natural Gas Science and Engineering 27 (2015), 1710–1722.
C Ruiz-Cárcel, VH Jaramillo, D Mba, JR Ottewill, and Y Cao. 2016. Combination
of process and vibration data for improved condition monitoring of industrial
systems working under variable operating conditions. Mechanical Systems and
Signal Processing 66 (2016), 699–714.
B Samanta. 2004. Artificial neural networks and genetic algorithms for gear fault detection. Mechanical Systems and Signal Processing 18 (2004), 1273–1282.
Jürgen Schmidhuber. 2015. Deep learning in neural networks: An overview.
Neural networks 61 (2015), 85–117.
Johan AK Suykens and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural Processing Letters 9, 3 (1999), 293–300.
Mien Van and Hee-Jun Kang. 2015. Bearing defect classification based on individual wavelet local Fisher discriminant analysis with particle swarm optimization. IEEE Transactions on Industrial Informatics 12, 1 (2015), 124–135.
Nishchal K Verma, Vishal Kumar Gupta, Mayank Sharma, and Rahul Kumar
Sevakula. 2013. Intelligent condition based monitoring of rotating machines
using sparse auto-encoders. In 2013 IEEE Conference on Prognostics and Health
Management (PHM). IEEE, 1–7.
Raymond E Wright. 1995. Logistic regression. (1995).
Junyan Yang, Youyun Zhang, and Yongsheng Zhu. 2007. Intelligent fault diagnosis
of rolling element bearing based on SVMs and fractal dimension. Mechanical
Systems and Signal Processing 21, 5 (2007), 2012–2024.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks?. In Advances in neural information processing systems. 3320–3328.
Guangquan Zhao, Guohui Zhang, Qiangqiang Ge, and Xiaoyong Liu. 2016. Research advances in fault diagnosis and prognostic based on deep learning. In 2016 Prognostics and System Health Management Conference (PHM-Chengdu). IEEE,
Rui Zhao, Ruqiang Yan, Zhenghua Chen, Kezhi Mao, Peng Wang, and Robert X
Gao. 2019. Deep learning and its applications to machine health monitoring.
Mechanical Systems and Signal Processing 115 (2019), 213–237.