Figure 4. Flow chart of the CPSO algorithm. (Available under a CC BY license.)

Source publication
Article
Because kernel parameters are selected at random in the traditional kernel independent component analysis (KICA) algorithm, this paper proposes a CPSO-KICA algorithm that combines Chaotic Particle Swarm Optimization (CPSO) with KICA. In CPSO-KICA, the maximum entropy of the extracted independent component is first adopted as the fitness fu...
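The "chaotic" part of CPSO is typically implemented by replacing uniform random draws with a chaotic map. A minimal sketch of chaotic particle initialization, assuming the common logistic map with control parameter mu = 4 (the excerpt does not state which map or parameters the paper actually uses):

```python
import numpy as np

def logistic_map_sequence(x0: float, n: int, mu: float = 4.0) -> np.ndarray:
    """Chaotic sequence x_{k+1} = mu * x_k * (1 - x_k), staying in [0, 1]."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def chaotic_init(n_particles: int, lo: float, hi: float, x0: float = 0.3) -> np.ndarray:
    """Map the chaotic sequence onto [lo, hi] to seed particle positions."""
    return lo + (hi - lo) * logistic_map_sequence(x0, n_particles)
```

The appeal of the chaotic sequence is its ergodicity: it spreads initial particles over the search interval without the clustering that pseudo-random draws can produce.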

Context in source publication

Context 1
... The flow chart of the CPSO algorithm is shown in Figure 4. To compare the search abilities of CPSO and PSO, this section compares the optimization results of the two algorithms on a simple multi-peak function; the equation of the objective function is: ...
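The objective function itself is elided in this excerpt, so a sketch of such a comparison can only use a stand-in multi-peak function. Below, the Rastrigin function with a plain global-best PSO loop (the chaotic variant would additionally perturb or re-seed particles with a chaotic map):

```python
import numpy as np

def rastrigin(x):
    # Stand-in multi-peak objective; the paper's actual function is not shown here.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(f, dim=2, n_particles=30, iters=200, lo=-5.12, hi=5.12,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), f(x)               # personal bests
    g = pbest[np.argmin(pbest_f)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best_x, best_f = pso(rastrigin)
```

On a function with many local minima, plain PSO can stall in one of them; the chaotic variant is claimed to escape such traps more often, which is what the comparison in the source is designed to show.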

Citations

... The PSO algorithm has the advantages of simple implementation, practicality, and fast computation, and is widely used in optimisation and solution problems. 25,26 Each ...
Article
In bearing fault diagnosis with a convolutional neural network (CNN), problems arise such as complex signal data processing and complex network parameter settings. To solve them, a rolling bearing fault diagnosis method is proposed based on improved particle swarm optimization and a convolutional neural network with wide first-layer kernels (IPSO-WCNN). A particle self-adaptive jump-out algorithm is proposed to overcome shortcomings of particle swarm optimization (PSO), and the improved particle swarm optimization (IPSO) adopts an adaptive inertia weight and linearly changing acceleration coefficients. The WCNN fault diagnosis method is designed for one-dimensional rolling bearing vibration signals, and the parameters of the WCNN are optimized by IPSO. Verification experiments show that the proposed method achieves higher accuracy than other methods, with good adaptability.
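The adaptive inertia weight and linearly changing acceleration coefficients mentioned above are commonly implemented as linear schedules over the iteration budget. The exact formulas of the cited paper are not shown here, so the following is an illustrative sketch with typical start/end values:

```python
def ipso_schedule(t, t_max, w_max=0.9, w_min=0.4,
                  c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Linearly decaying inertia weight and linearly varying acceleration
    coefficients, as commonly used in improved PSO variants. The schedules
    and constants here are illustrative, not from the cited paper."""
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac
    c1 = c1_start + (c1_end - c1_start) * frac   # cognitive term shrinks
    c2 = c2_start + (c2_end - c2_start) * frac   # social term grows
    return w, c1, c2
```

Early iterations thus favor exploration (large w, large c1), while late iterations favor convergence toward the global best (small w, large c2).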
... At the same time, a bearing fault diagnosis method via an LSSVM identification model was presented. Liu et al. [35] established a fault detection model based on a chaotic PSO algorithm and kernel-independent component analysis, and the simulation results showed that the optimization method can keep the PSO algorithm from falling into a local extremum, a phenomenon it is otherwise susceptible to. Furthermore, an improved PSO- and SVM-based fault diagnosis methodology was presented in [36] to predict faults in nuclear power plants. ...
Article
Fault diagnosis is a challenging topic for complex industrial systems because of the varying environments such systems operate in. To improve fault diagnosis performance, this study designs a novel approach using particle swarm optimization (PSO) with wavelet mutation and a least squares support vector machine (LSSVM). The implementation entails three steps. First, the original signals are decomposed with an orthogonal wavelet packet decomposition algorithm. Second, the decomposed signals are reconstructed to obtain the fault features. Finally, the extracted features are used as inputs to the fault diagnosis model established in this research to improve classification accuracy. This joint optimization method not only solves the problem of PSO falling easily into a local extremum, but also effectively improves the classification performance of fault diagnosis. Experimental verification shows that the wavelet mutation particle swarm optimization and least squares support vector machine (WMPSO-LSSVM) fault diagnosis model has a maximum fault recognition efficiency 12% higher than LSSVM and 9% higher than an extreme learning machine (ELM). The error of the corresponding regression model under the WMPSO-LSSVM algorithm is 0.365 less than that of the traditional linear regression model. The proposed fault scheme can therefore effectively identify faults that occur in complex industrial systems.
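Wavelet mutation is usually built on the Morlet mother wavelet: a wavelet-shaped random factor decides both the direction and the magnitude of the mutation, and a growing dilation parameter shrinks mutations over time for fine-tuning. A sketch under that assumption, with an illustrative dilation schedule (the paper's exact schedule may differ):

```python
import math
import random

def morlet(phi: float) -> float:
    # Morlet mother wavelet, used to shape the mutation step.
    return math.exp(-phi**2 / 2.0) * math.cos(5.0 * phi)

def wavelet_mutation(x: float, lo: float, hi: float,
                     t: int, t_max: int, rng: random.Random) -> float:
    """Mutate one particle coordinate with a wavelet-shaped step.
    sigma lies in [-1, 1]; positive sigma pushes x toward hi, negative
    toward lo, so the result always stays inside [lo, hi]."""
    a = math.exp(10.0 * t / t_max)        # illustrative dilation schedule
    phi = rng.uniform(-2.5, 2.5)
    sigma = morlet(phi) / math.sqrt(a)    # step shrinks as t grows
    if sigma > 0:
        return x + sigma * (hi - x)
    return x + sigma * (x - lo)
```

Because sigma is bounded by 1/sqrt(a), late-stage mutations become small local refinements rather than global jumps.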
...
• Single-factor methods: Pearson correlation (PearsonCorr), 20 Spearman correlation (SpearmanCorr), 21 distance correlation (DistCorr), 22 mutual information (MI), 23,24 and maximal information coefficient (MIC) 25
• Optimization-based methods: genetic algorithm (GA) 26 and particle swarm optimization (PSO) 27
• Recursive feature elimination (RFE) 28
• Information entropy-based methods: joint mutual information (JMI), 29 joint mutual information maximization (JMIM), 30 minimum-redundancy maximum-relevance (MRMR), 31,32 and conditional mutual information maximization (CMIM) 33
• Random forest-based methods: mean decrease impurity (MDI) 34 and mean decrease accuracy (MDA) 35
• Regularization-based methods: Lasso, 36 Ridge, 37 and Elastic Net 38
• Feature extraction methods: PCA, KPCA, PLS, LLE, and LDA

Some of the above methods, including PearsonCorr, SpearmanCorr, DistCorr, MI, MIC, JMI, JMIM, MRMR, CMIM, MDI, and MDA, can assign importance scores to features and then sort them. ...
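As a concrete instance of the single-factor scoring methods listed above, a minimal Pearson-correlation feature ranker (pure NumPy; a generic sketch, not code from the cited papers):

```python
import numpy as np

def pearson_rank(X: np.ndarray, y: np.ndarray):
    """Score each feature by |Pearson correlation| with the target and
    return (scores, order), where order sorts features from most to
    least relevant."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc**2).sum(axis=0) * (yc**2).sum())
    scores = np.abs(Xc.T @ yc) / denom
    order = np.argsort(scores)[::-1]
    return scores, order
```

The other single-factor methods follow the same score-then-sort pattern, differing only in the statistic used.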
... This is because both approaches entail differentiating the statistical index, which is difficult if the chain involves a kernel function [86]. Nevertheless, many researchers have derived analytical expressions for either kernel contributions-based diagnosis [66,79,81,83,87,94,119,127,133,136,146,150,156,157,162,164,194,213,241,268,275,276,278,279,288,289,293] or kernel reconstructions-based diagnosis [86,117,140,155,161,163,176,217,236,254,265,285]. However, most derivations are applicable only when the kernel function is the RBF, Equation (5). ...
... More concretely, the goal is usually to maximize negentropy, which is a measure of the distance of a distribution from Gaussianity [309]. Kernel ICA can be performed by doing kernel PCA for whitening, followed by linear ICA, as did many researchers [66,72,73,82,90,97,100,106,107,133,140,145,154,155,157,188,203,213,233,239,265,275,276,283,305]. ...
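Negentropy is commonly estimated with Hyvärinen's approximation J(y) ≈ (E[G(y)] − E[G(ν)])², where G(u) = log cosh(u) and ν is a standard Gaussian variable. A sketch that, for simplicity, estimates the Gaussian expectation by Monte Carlo rather than using the tabulated constant:

```python
import numpy as np

def negentropy(y: np.ndarray, rng=None) -> float:
    """Approximate negentropy via J(y) ~ (E[G(y)] - E[G(v)])^2 with
    G(u) = log cosh(u) and v ~ N(0, 1). y is standardized first; the
    Gaussian expectation is estimated by Monte Carlo here."""
    rng = rng or np.random.default_rng(0)
    y = (y - y.mean()) / y.std()
    G = lambda u: np.log(np.cosh(u))
    gauss = rng.standard_normal(100_000)
    return float((G(y).mean() - G(gauss).mean()) ** 2)
```

By construction the measure is zero (up to sampling noise) for Gaussian data and grows as the distribution departs from Gaussianity, which is what ICA maximizes.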
... Other criteria for optimizing kernel parameters were proposed in [183]. Some search techniques include the bisection method [162], Tabu search [247,250,274], particle swarm optimization [184,276], differential evolution [184], and genetic algorithm [84,93,102,108,154]. More recent studies have emphasized that kernel parameters must be optimized simultaneously with the choice of latent components (e.g., no. of kernel principal components) since these choices depend on each other [67,68]. ...
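The simultaneous optimization noted in [67,68] amounts to a joint search over the kernel parameter and the component count, since neither choice is optimal in isolation. A minimal grid-search sketch with an RBF kernel and a purely illustrative explained-variance criterion (not a criterion from the cited papers):

```python
import itertools
import numpy as np

def rbf_gram(X, sigma):
    """RBF Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def joint_select(X, sigmas, n_comps, criterion):
    """Jointly search kernel width and component count; 'criterion' is
    any score over (eigenvalues, k) to maximize."""
    best = None
    for sigma, k in itertools.product(sigmas, n_comps):
        K = rbf_gram(X, sigma)
        eigvals = np.linalg.eigvalsh(K)[::-1]   # descending order
        score = criterion(eigvals, k)
        if best is None or score > best[0]:
            best = (score, sigma, k)
    return best

def explained_variance(eigvals, k):
    # Illustrative placeholder: kernel variance captured by k components,
    # lightly penalized by model size.
    return eigvals[:k].sum() / eigvals.sum() - 0.01 * k
```

Any of the criteria surveyed in the text (or a search technique such as PSO) can replace the grid and the placeholder score without changing the joint structure of the search.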
Article
Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work identifies 12 major issues surrounding the use of kernel methods for nonlinear feature extraction; for each issue, we discuss why it is important and how it has been addressed through the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook on the future of kernel-based process monitoring, which can hopefully instigate more advanced yet practical solutions in the process industries.
... With the constant increase in market demand for high-quality products, process monitoring and state evaluation technology have become more and more attractive [1][2][3]. Monitoring the production process in real time is of great economic value. ...
Article
In the hot strip rolling process, many process parameters are related to the quality of the final products. Sometimes, the process parameters corresponding to different steel grades are close to, or even overlap, each other. In practice, locating overlap regions and detecting products with abnormal quality are crucial yet challenging. To address this challenge, this work introduces a novel method, kernel entropy component analysis (KECA) with weighted cosine distance, for fault detection and overlap-region location. First, KECA is used to cluster the training samples of multiple steel grades, and the samples assigned incorrect classes are taken as the boundary of the sample distribution. Next, the concepts of a recursion-based regional center and a weighted cosine distance are introduced. For each steel grade, the regional center and the weight coefficients are determined. Finally, the weighted cosine distance between the testing sample and the regional center is chosen as the index for judging abnormal batches. The samples in the overlap region of multiple steel grades need particular attention in the real production process, which is conducive to quality grading and combined production. The weighted cosine distances between the testing sample and the different regional centers are used to locate the overlap region. A dataset from a hot strip rolling process is used to evaluate the performance of the proposed methods.
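The weighted cosine distance index can be sketched as follows, assuming the weights enter element-wise before the cosine is taken (the paper's exact weighting scheme may differ):

```python
import numpy as np

def weighted_cosine_distance(x: np.ndarray, center: np.ndarray,
                             w: np.ndarray) -> float:
    """1 - cosine similarity between the element-wise weighted test
    sample and regional center. 0 means perfectly aligned with the
    center; values near 1 flag abnormal batches."""
    xw, cw = w * x, w * center
    cos = (xw @ cw) / (np.linalg.norm(xw) * np.linalg.norm(cw))
    return 1.0 - cos
```

Computing this distance against each grade's regional center and comparing the values is one way to decide which overlap region, if any, a testing sample falls into.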
Article
Nowadays, how to select the kernel function and how to set its parameters so as to ensure high performance in fault diagnosis applications remain two open research issues. This paper provides a comprehensive literature survey of kernel-preprocessing methods in condition monitoring tasks, with emphasis on the procedures for selecting their parameters. Accordingly, twenty kernel optimization criteria and sixteen kernel functions are analyzed. A kernel evaluation framework is further provided to help in the selection and adjustment of kernel functions. The proposal is validated via a KPCA-based monitoring scheme and two well-known benchmark processes.
Article
Due to the importance and hazards of large-scale production processes, the accuracy and reliability of fault diagnosis approaches are critical for safe operation. In this paper, a robust fault diagnosis approach is proposed to realize reliable classification while ensuring high accuracy. A feature importance distribution is proposed to select appropriate dimension reduction methods, and the real data structure is preserved by same-scale standardization and same-criterion dimension reduction. By optimizing the whole procedure over datasets and classifiers, traditional Support Vector Machine and Naive Bayes classifiers can reach the performance level of ensemble learning. A parallel classifier built from different classification theories then improves the reliability of the final prediction. Experimental results show that the proposed approach outperforms traditional approaches, with accuracy exceeding 92% on the Tennessee Eastman benchmark (18 faults) and 87% on a real-world three-phase flow process (2 faults).
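The parallel-classifier idea, heterogeneous classifiers voting with agreement as a reliability signal, can be sketched as follows (a simple stand-in; the paper's actual combination rule is not shown in this excerpt):

```python
from collections import Counter

def parallel_predict(predictions):
    """Combine the outputs of several heterogeneous classifiers (e.g.,
    SVM and Naive Bayes) for one sample. Returns (label, reliable):
    the majority label, plus a flag that is True only when all
    classifiers agree, a crude proxy for prediction reliability."""
    votes = Counter(predictions)
    label, count = votes.most_common(1)[0]
    return label, count == len(predictions)
```

Samples flagged as unreliable could then be routed to a fallback, such as manual inspection or a heavier ensemble model.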