Article

Intelligent monitoring by interfacing knowledge-based systems and multivariate statistical monitoring


Abstract

An intelligent process monitoring and fault diagnosis environment has been developed by interfacing multivariate statistical process monitoring (MSPM) techniques and knowledge-based systems (KBS) for monitoring multivariable process operation. The real-time KBS developed in G2 is used with multivariate SPM methods based on canonical variate state space (CVSS) process models. Fault detection is based on T² charts of state variables. Contribution plots in G2 are used for determining the process variables that have contributed to the out-of-control signal indicated by large T² values, and the G2 Diagnostic Assistant (GDA) is used to diagnose the source causes of abnormal process behavior. The MSPM modules developed in Matlab are linked with G2. This intelligent monitoring and diagnosis system can be used to monitor multivariable processes with autocorrelated, cross-correlated, and collinear data. The structure of the integrated system is described and its performance is illustrated by simulation studies.
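As a rough illustration of the detection step described above, the sketch below computes a T² statistic from state-variable estimates and flags out-of-control samples against a chi-square limit. The simulated state sequence, covariance estimate, and confidence level are assumptions for the example only, not the CVSS identification procedure of the paper.

```python
# Minimal sketch (simulated data): T^2 chart on state variables.
# In the paper, state estimates come from a canonical variate state-space
# (CVSS) model identified from plant data; here they are random numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_ref, n_new, k = 500, 100, 4               # reference samples, new samples, states

x_ref = rng.normal(size=(n_ref, k))         # in-control state estimates
mu = x_ref.mean(axis=0)
S_inv = np.linalg.inv(np.cov(x_ref, rowvar=False))

x_new = rng.normal(size=(n_new, k))         # new operating data
x_new[50:, 0] += 3.0                        # assumed shift to mimic a fault

d = x_new - mu
t2 = np.einsum("ij,jk,ik->i", d, S_inv, d)  # T^2 for each new sample

ucl = stats.chi2.ppf(0.99, df=k)            # chi-square limit at 99% confidence
alarms = np.where(t2 > ucl)[0]
print(f"UCL = {ucl:.2f}, first out-of-control samples: {alarms[:5]}")
```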


... Both PCA- and PLS-based contribution plots are limited in their ability to quickly identify faults because the underlying PCA and PLS methods do not produce the most accurate dynamic models, even when lagged data are used in their construction. This drawback has been recognized, and contribution plots have been used in conjunction with state-space models to better take into account the process dynamics [23][24][25][26][29]. In a few studies [23][24][25], subspace system identification based on N4SID has been utilized to obtain a state-space model that was used to construct contribution plots. ...
... Although CVA has been demonstrated to outperform N4SID for subspace identification in terms of stability and parsimony (fewer parameters) in the representation of dynamic systems [26][27][28], investigation of the application of contribution plots to CVA is limited. In one study where contribution plots and CVA-based state-space models were investigated [29], contributions were calculated based on the statistics of the states in the state-space model. In that study [29], process inputs were not considered in the state-space model. ...
Article
While canonical variate analysis (CVA) has been used as a dimensionality reduction technique to take into account serial correlations in the process data with system dynamics, its effectiveness in fault identification (i.e., identification of variables most closely associated with a fault) in industrial processes has not been extensively investigated. This paper proposes CVA-based contributions for fault identification, where two types of contributions are developed based on the variations in the canonical state space and in the residual space. The two contributions are used to categorize faulty variables into state-space faulty variables (SSFVs) and residual-space faulty variables (RSFVs), which enhances the understanding of the character of each fault as well as the performance of fault monitoring based on different statistics. The effectiveness of the proposed approach is demonstrated on the Tennessee Eastman process. The simulation results show that the faulty variables identified by the CVA-based contributions can impact the statistics of the state space, the residual space, or both; and abnormal events are observed to be more often linked to faulty variables in the residual space rather than in the state space.
... When the number of variables is large, analyzing contribution plots and corresponding variable plots to reason about the source cause of the abnormality may become tedious and challenging. This analysis can be automated and linked with real-time diagnosis (Norvilas et al. 2000; Undey et al. 2000) by using knowledge-based systems. ...
... Analysis of contribution plots can be automated and linked with fault diagnosis by using real-time knowledge-based systems (KBS). The integration of statistical detection tools and contribution plots with fault diagnosis by using a supervisory KBS has been illustrated for both continuous (Norvilas et al. 2000) and batch processes (Undey et al. 2003a, 2004). ...
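As a toy illustration of how such automation might look in code (not the G2/GDA implementation described in the cited works), the sketch below ranks variable contributions and looks up candidate causes in a small hand-written rule base; the variable names, contribution values, and rules are hypothetical.

```python
# Hypothetical sketch: automating contribution-plot analysis with a rule base.
# All variable names, contributions, and rules below are illustrative only.
contributions = {"feed_temp": 0.52, "coolant_flow": 0.31,
                 "level": 0.09, "pressure": 0.08}

rule_base = {
    frozenset({"feed_temp", "coolant_flow"}): "possible cooling-system fault",
    frozenset({"level"}): "possible level-sensor drift",
}

# Select the variables that together explain most of the out-of-control signal
ranked = sorted(contributions, key=contributions.get, reverse=True)
top, total = [], 0.0
for v in ranked:
    top.append(v)
    total += contributions[v]
    if total >= 0.8:        # assumed 80% cut-off
        break

diagnosis = rule_base.get(frozenset(top), "no matching rule; refer to operator")
print("dominant variables:", top)
print("suggested cause:", diagnosis)
```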
Book
In an age of heightened nutritional awareness, assuring healthy human nutrition and improving the economic success of food producers are top priorities for agricultural economies. In the context of these global changes, new innovative technologies are necessary for appropriate agro-food management from harvest and storage to marketing and consumer consumption. Optical Monitoring of Fresh and Processed Agricultural Crops takes a task-oriented approach, providing essential applications for a better understanding of non-invasive sensory tools used for raw, processed, and stored agricultural crops. This authoritative volume presents interdisciplinary optical methods and technologies feasible for in-situ analyses, such as vision systems, VIS/NIR spectroscopy, hyperspectral camera systems, scattering, time- and spatially-resolved approaches, fluorescence, and sensor fusion. Written by an internationally recognized team of experts and using a framework of new approaches, this text illustrates how cutting-edge sensor tools can perform rapid and non-destructive analysis of biochemical, physical, and physiological properties, such as maturity stage, nutritional value, and neoformed compounds appearing during processing. These are critical components to maximizing nutritional quality and safety of fruits and vegetables and decreasing economic losses due to produce decay. Quality control systems are quickly gaining a foothold in food manufacturing facilities, making Optical Monitoring of Fresh and Processed Agricultural Crops a valuable resource for agricultural technicians and developers working to maintain nutritional product value and approaching a fine-tuned control process in the crop supply chain.
... Cinar et al. have successfully combined multivariate statistical data analysis with expert systems for process fault diagnosis. Basically, the multivariate statistical data analysis module developed in MATLAB was converted into C code and then linked with the G2 expert system through a G2 standard interface (GSI) link [32], [34]. Cinar et al. exploited only the G2 Diagnostic Assistant (GDA) capability (i.e., a graphical design tool similar to Simulink/MATLAB). ...
Conference Paper
Full-text available
Intelligent control and asset management for the petroleum industry is crucial for profitable operation and maintenance of oil and gas facilities. A research program was initiated to study the feasibility of an intelligent asset management system for the offshore oil and gas industry in Atlantic Canada. The research program has achieved several milestones. The conceptual model of an automated asset management system, its architecture, and its behavioral model have been defined (1, 2). Furthermore, an implementation plan for such a system has been prepared, and the appropriate development tools have been chosen (3). A system reactive agent structure was defined based on the MATLAB environment, and its communication requirements were analyzed and validated (31). This paper builds on the previous work and proposes a general structure for the ICAM system's intelligent supervisory agent and its software implementation. We also describe the software implementation using the G2 expert system development environment. Furthermore, we analyze and define the autonomy requirements of the reactive agents of such a system. Asset management and control of modern process plants involve many tasks of different time-scales and complexity, including data reconciliation and fusion, fault detection, isolation, and accommodation (FDIA), process model identification and optimization, and supervisory control. The automation of these complementary tasks within an information and control infrastructure will reduce maintenance expenses, improve utilization and output of manufacturing equipment, enhance safety, and improve product quality. Many research studies have proposed different combinations of systems-theoretic and artificial intelligence techniques to tackle the asset management problem and delineated the requirements of such a system (4), (5), (6).
... Since PCA and DPCA assume stationarity of the variables [7] during the modeling process, a high rate of false alarms is generated in the diagnosis stage if the test data are non-stationary [8]. The non-stationarity problem has been tackled with adaptive versions of PCA in the framework of the principal component subspace, as some authors suggest in [9] and [10]; however, these cannot be used for FDI tasks since the adaptation is based on variations in the statistical parameters of the observations, so all kinds of changes are treated as variations in the operating point. ...
... In a fault detection scheme based on DPCA, a current observation x_a is classified with respect to the mean, standard deviation, and correlation structure of the implicit model. However, when the operating point of the system changes, the classification procedure generates false alarms because the standardization of x_a is carried out with respect to the historical means (5) and standard deviations (7). In conclusion, the DPCA model cannot be tuned to a new operating point. ...
Conference Paper
Full-text available
In this paper, the false alarm issue generated by a fault detection scheme based on dynamic principal component analysis (DPCA) is tackled. This problem occurs if the nominal operating point of the system changes. It is shown how input-output models can be identified simultaneously from the DPCA-based statistical modeling, and how these deterministic models can be used in a complementary way to develop a fault detection scheme able to distinguish between variations due to changes in the operating point and those due to faults. During the monitoring stage, we propose to use a fixed statistical model and an adaptive standardization of the input and output signals. The effectiveness of the proposed methodology is evaluated for fault detection in an interconnected tanks system.
... Studies have also been carried out to improve the failure detection technique by combining the PCA method with other methods [28]. The integration of PCA with expert systems has also given very good results [29], [30]. ...
Article
This paper describes the combination of statistical techniques and mathematical modeling in order to develop a fault detection system for a 2 MW natural gas engine under actual operating conditions. The mixing chamber, turbochargers, intake and exhaust manifolds, cylinders, throttle and bypass valves, and the electric generator, which are the main components of the gas engine, were studied with a mean value engine model to complement the statistical analysis. Objective: The main objective of this paper is to integrate two approaches in order to relate the faults to changes in the mean thermodynamic values of the system, helping to keep the engine in optimal operating conditions in terms of reliability. Principal Component Analysis (PCA), a multivariate statistical fault detection technique, was used to analyze the historical data from the gas engine and detect abnormal operating conditions by means of statistical measures such as the Square Prediction Error (SPE) and T². These abnormal operating conditions are categorized using clustering techniques and contribution plots, and their causes are then examined with the support of the results of a mean value mathematical model proposed for the system. The integration of the proposed methods made it possible to identify which component or components of the engine might be malfunctioning. Once combined, these two methods were able to accurately predict and identify faults as well as shutdowns of the gas engine during a month of operation. Statistical analysis was used to detect faults in a 2 MW industrial gas engine, and the results were compared with a mean value model in order to detect variations in the thermodynamic properties of the system under abnormal conditions.
... CVA was proposed by Hotelling in 1936 (Norvilas et al., 2000). It aims to maximize the correlation between the past and the future data sets, making it more reliable to predict future industrial processes. ...
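For reference, the computation behind this idea can be stated compactly (standard CVA formulation, not specific to any one cited paper): with stacked past and future vectors p_t and f_t, the canonical directions follow from an SVD of the scaled cross-covariance,

```latex
\Sigma_{pp}^{-1/2}\,\Sigma_{pf}\,\Sigma_{ff}^{-1/2} = U \, S \, V^{\top},
\qquad
x_t = J\, p_t, \quad J = U_k^{\top}\,\Sigma_{pp}^{-1/2},
```

where the singular values in S are the canonical correlations between past and future, and keeping the first k left singular vectors U_k yields the state (memory) vector x_t.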
Article
Constructing an efficient and stable fault diagnosis method is crucial for industrial production. Most classical methods are inadequate for process diagnosis, as their assumptions are difficult to satisfy in real production processes. For instance, a Gaussian distribution is required for principal component analysis (PCA), canonical variate analysis (CVA) is only compatible with linear industrial processes, and independent component analysis (ICA) is not useful for dynamic processes. In this paper, we have proposed a novel method, Canonical Variate and Kernel Independent Component Analysis (CV–KICA), which combines the advantages of CVA and KICA, and tested this method with the Tennessee Eastman (TE) process. We first use ICA to suppress the impacts of noise in industrial production, and introduce a kernel into ICA for adapting to non-linearity, then integrate the resulting components into the CVA method for dynamic processes. Simulations and experimental results with the TE process indicate that the CV–KICA method outperforms other classical diagnosis methods such as CVA, KICA and DKICA in fault diagnosis, providing a novel approach that could handle dynamic and nonlinear situations in real production processes.
... Different from t_i in Equations (3) and (4), the vector t is used to represent the scores of a sample of an engine condition in all PCs. Hotelling's T², which is also known as the Mahalanobis distance [27], is used to evaluate the variations of the observations in the PCS, which is defined as ...
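The excerpt above is truncated at the definition; for completeness, the standard PCA-based form it refers to, with t the score vector and Λ the diagonal matrix of the retained eigenvalues, is

```latex
T^{2} = \mathbf{t}^{\top} \Lambda^{-1} \mathbf{t} = \sum_{i=1}^{a} \frac{t_i^{2}}{\lambda_i},
```

where a is the number of retained principal components.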
Article
Full-text available
The problem of timely detecting the engine faults that make engine operating parameters exceed their control limits has been well-solved. However, in practice, a fault of a diesel engine can be present with weak signatures, with the parameters fluctuating within their control limits when the fault occurs. The weak signatures of engine faults bring considerable difficulties to the effective condition monitoring of diesel engines. In this paper, a multivariate statistics-based fault detection approach is proposed to monitor engine faults with weak signatures by taking the correlation of various parameters into consideration. This approach firstly uses principal component analysis (PCA) to project the engine observations into a principal component subspace (PCS) and a residual subspace (RS). Two statistics, i.e., Hotelling’s T2 and Q statistics, are then introduced to detect deviations in the PCS and the RS, respectively. The Hotelling’s T2 and Q statistics are constructed by taking the correlation of various parameters into consideration, so that faults with weak signatures can be effectively detected via these two statistics. In order to reasonably determine the control limits of the statistics, adaptive kernel density estimation (KDE) is utilized to estimate the probability density functions (PDFs) of Hotelling’s T2 and Q statistics. The control limits are accordingly derived from the PDFs by giving a desired confidence level. The proposed approach is demonstrated by using a marine diesel engine. Experimental results show that the proposed approach can effectively detect engine faults with weak signatures.
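A compact sketch of this type of scheme is given below under simplifying assumptions: simulated data instead of engine measurements, and a plain Gaussian KDE (rather than the adaptive KDE of the paper) to set the control limits.

```python
# Sketch: PCA monitoring with T^2 and Q (SPE) statistics and KDE-based limits.
# Simulated data; the cited work uses marine diesel engine data and adaptive KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))              # training (normal) observations
X = (X - X.mean(0)) / X.std(0)              # standardize

U, s, Vt = np.linalg.svd(X, full_matrices=False)
a = 3                                       # retained principal components
P = Vt[:a].T                                # loadings (principal component subspace)
lam = (s[:a] ** 2) / (X.shape[0] - 1)       # retained eigenvalues

T = X @ P                                   # scores in the PCS
t2 = np.sum(T**2 / lam, axis=1)             # Hotelling's T^2
E = X - T @ P.T                             # residuals (residual subspace)
q = np.sum(E**2, axis=1)                    # Q (SPE) statistic

def kde_ucl(stat, conf=0.99):
    """Control limit as the conf-quantile of a KDE fitted to the statistic."""
    kde = gaussian_kde(stat)
    grid = np.linspace(stat.min(), stat.max() * 2, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, conf)]

print("T2 limit:", round(kde_ucl(t2), 2), " Q limit:", round(kde_ucl(q), 2))
```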
... With the growing demands of process safety and quality consistency, process monitoring has become a hotspot of research in the past decades, and various monitoring methods have been proposed. Generally, the traditional process monitoring methods can be classified into three categories: model-based methods, [1] knowledge-based methods, [2] and data-based methods. [3] Modelbased methods require the exact relationships of different variables, and knowledge-based methods are based on the available knowledge of the process behaviour and the experience of the process operations. ...
Article
Full-text available
Multivariate statistical process monitoring (MSPM) methods are significant for improving production efficiency and enhancing safety. However, to the authors' best knowledge, there is no survey paper providing statistics of published papers over the past decade. In this paper, several issues related to MSPM methods are reviewed and studied. First, the annual publication numbers of journal articles concerning MSPM are provided to show the active development of this important research field and to point out several promising directions for the future. Second, the annual numbers of patents are also shown to demonstrate the practicality of different MSPM methods. In particular, this paper also lists and analyzes the number of MSPM-related publications in China. The statistics indicate that Chinese researchers and engineers may have different viewpoints from those of other countries, which results in different development trends of MSPM in China.
... Contribution plots can be used to determine the variables that inflated T² and SPE (Kourti and MacGregor, 1996). The development of contribution plots can be automated and linked with real-time diagnosis (Norvilas et al., 1997). MPCA is equivalent to performing ordinary PCA on a large two-dimensional matrix constructed by unfolding the three-way array. ...
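One common way to compute such contributions for a single out-of-control observation is sketched below (a standard formulation; several variants exist in the literature, and the loadings and data here are assumed toy values).

```python
# Sketch: per-variable contributions to SPE and T^2 for one observation x,
# given PCA loadings P (m variables x a components) and eigenvalues lam.
import numpy as np

def spe_contributions(x, P):
    e = x - P @ (P.T @ x)                    # residual of the observation
    return e ** 2                            # each variable's share of SPE

def t2_contributions(x, P, lam):
    t = P.T @ x                              # scores
    # variable j contributes x_j * sum_i (t_i / lam_i) * P[j, i]
    return (P * (t / lam)).sum(axis=1) * x

rng = np.random.default_rng(2)
P, _ = np.linalg.qr(rng.normal(size=(6, 3)))  # toy orthonormal loadings
lam = np.array([3.0, 1.5, 0.7])
x = rng.normal(size=6)
x[2] += 4.0                                   # variable 3 is disturbed
print("SPE contributions:", np.round(spe_contributions(x, P), 2))
print("T2 contributions :", np.round(t2_contributions(x, P, lam), 2))
```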
Article
On-line real-time monitoring of fermentation processes is a crucial task. Conventional on-line process monitoring techniques and an adaptive hierarchical Principal Component Analysis technique were applied. A hybrid supervisory knowledge-based system with a heuristic rule-base was also developed and integrated with on-line monitoring techniques for providing real-time monitoring and supervision.
... These two operational tasks are the major components of traditional statistical process control. [3] In this work we assess the applicability of combining a nonlinear kernel technique, kernel Fisher discriminant analysis (KFDA), with a preprocessing method as a tool for on-line fault identification. Using a simulation process, this work compares its performance to an existing linear principal component analysis (PCA)-based identification technique. ...
Article
Real-time process monitoring and diagnosis of industrial processes is one of the important operational tasks for quality and safety reasons. The objective of fault diagnosis or identification is to find the process variables responsible for causing a specific fault in the process. This helps process operators investigate root causes more effectively. This work assesses the applicability of combining a nonlinear statistical technique, kernel Fisher discriminant analysis, with a preprocessing method as a tool for on-line fault identification. To compare its performance with an existing linear principal component analysis (PCA) identification scheme, a case study on a benchmark process was performed, showing that the proposed fault identification scheme produced more reliable diagnosis results than the linear method.
... Stochastic methods belong to the signal-based FDI techniques: these methods reduce the correlations between variables and the dimensionality of the data [15][16][17], and enable efficient extraction of the relevant information from the data. A widely applied stochastic monitoring technique is Canonical Variate Analysis (CVA), introduced in 1936 by Hotelling [18]. In the FDI context, Feature Selection (FS) is extensively investigated in the literature under several alternative terms, such as attribute weighting, dimension reduction, and so on [19,20]. ...
Article
In the residential energy sector there is a growing interest in smart energy management systems able to monitor, manage and minimize energy consumption. A key factor in curbing household energy consumption is the correction of erroneous occupant behaviors and system malfunctions. In this scenario, energy efficiency benefits can be either amplified or neutralized by, respectively, good or bad practices carried out by end users. The authors propose a diagnostic system for a residential microgrid application able to detect faults and occupant bad behaviors. In particular, a nonlinear monitoring method based on Kernel Canonical Variate Analysis is developed. To overcome the normality assumption regarding the signals' probability distribution, Upper Control Limits are derived from the estimated Probability Density Function through Kernel Density Estimation. The proposed method, applied to a smart residential microgrid, is tested on experimental data acquired from July 2012 to October 2013.
... Canonical Variate Analysis (CVA), introduced in 1936 by Hotelling [12], has typically been developed with an Upper Control Limit (UCL) based on the Gaussian assumption. Recently, numerical Probability Density Function (PDF) estimation techniques have been introduced to set the UCL when the Gaussian assumption does not hold [9], [13]. ...
Conference Paper
Full-text available
In the context of household energy management, growing interest is addressed to the development of smart systems able to monitor and manage resources in order to minimize waste. One of the key factors in curbing energy consumption in the household sector is the correction of erroneous occupant behaviours and system malfunctioning, due to the lack of awareness of the final user. Indeed, the benefits achievable with energy efficiency could be either amplified or neutralized by, respectively, good or bad practices carried out by the final users. The authors propose a diagnostic system for home energy management applications able to detect faults and occupant behaviours. In particular, a nonlinear monitoring method based on Kernel Canonical Variate Analysis is developed. To remove the assumption of normality, Upper Control Limits are derived from the estimated Probability Density Function through Kernel Density Estimation. The proposed method is applied to smart home temperature sensors to detect anomalies with respect to efficient user behaviours, as well as sensor and actuator faults. The method is tested on experimental data acquired in a real apartment.
... Combining data-driven methods with qualitative modelling is not a new concept. Typical qualitative models include the signed directed graph (SDG) (Vedam & Venkatasubramanian, 1999; Lee, Han & Yoon, 2004), expert systems (Norvilas et al., 2000), cause-effect models (Leung & Romagnoli, 2002; Chiang & Braatz, 2003), and plant connectivity modelling using extensible markup language (XML) (Thambirajah et al., 2009), among others. To handle the multivariate nature of process variables, the PCA-SDG hybrid (Vedam & Venkatasubramanian, 1999) is a well-known and effective method for fault detection and diagnosis. ...
Article
Root cause analysis is an important method for fault diagnosis when used with multivariate statistical process monitoring (MSPM). Conventional contribution analysis in MSPM can only isolate the effects of the fault by pinpointing inconsistent variables, but not the underlying cause. By integrating reconstruction-based multivariate contribution analysis (RBMCA) with fuzzy-signed directed graph (SDG), this paper developed a hybrid fault diagnosis method to identify the root cause of the detected fault. First, a RBMCA-based fuzzy logic was proposed to represent the signs of the process variables. Then, the fuzzy logic was extended to examine the potential relationship from causes to effects in the form of the degree of truth (DoT). An efficient branch and bound algorithm was developed to search for the maximal DoT that explains the effect, and the corresponding causes can be identified. Except for the need to construct an SDG for the process, this method does not require historical data of known faults. The usefulness of the proposed method was demonstrated through a case study on the Tennessee Eastman benchmark problem.
... , García Márquez et al. (2007a), Norvilas et al. (2000) and Weidl et al. (2005). ...
... While the above mentioned work has been mainly done by the electrical engineering community, sensor FDI in the process control community (also known as sensor validation) has also been an active area of research [10,17,21,26,32]. A detailed survey on sensor validation has been given by [7]. ...
Article
This paper proposes a novel approach to detection and isolation of faulty sensors in multivariate dynamic systems. After formulating the problem of sensor fault detection and isolation in a dynamic system represented by a state space model, we develop the optimal design of a primary residual vector for fault detection and a set of structured residual vectors for fault isolation using an extended observability matrix and a lower triangular block Toeplitz matrix of the system. This work is, therefore, a vector extension to the earlier scalar-based approach to fault detection and isolation. Besides proposing a new algorithm for consistent identification of the Toeplitz matrix from noisy input and output observations without identifying the state space matrices {A, B, C, D} of the system, the main contributions of this newly proposed fault detection and isolation scheme are: (1) a set of structured residual vectors is employed for fault isolation; (2) after determination of the maximum number of multiple sensors that are most likely to fail simultaneously, a unified scheme for isolation of single and multiple faulty sensors is proposed; and (3) the optimality of the primary residual vector and the structured residual vectors is proven. We prove the advantage of our newly proposed vector-based scheme over the existing scalar element-based approach for fault isolation and illustrate its practicality by simulated and experimental evaluation on a multivariate pilot scale, computer interfaced system.
... A contribution plot approach is adopted in the VAV terminal sensor FDD strategy developed in this study for fault isolation, which has been used recently by a few researchers in different engineering fields [19,20,27-29]. This approach compares the contribution of each variable to T² and SPE when fault(s) have been detected. ...
Article
Sensor failure and bias are harmful to the process control of air conditioning systems, resulting in poor control of the indoor environment and waste of energy. A strategy is developed for flow sensor fault detection and validation in variable air volume (VAV) terminals of air conditioning systems. Principal component analysis (PCA) models at both the system and terminal levels are built and employed in the strategy. Sensor faults are detected using both the T² statistic and the square prediction error (SPE) and isolated using the SPE contribution plot. As the reliability and sensitivity of fault isolation may be affected by multiple faults at the system level, a terminal-level PCA model is designed to further examine the suspicious terminals. The faulty sensor is reconstructed after it is isolated by the strategy, and the FDD strategy repeats using the recovered measurements until no further fault can be detected. Thus, the sensitivity and robustness of the FDD strategy are enhanced significantly. The sensor fault detection and validation strategy, as well as the sensor reconstruction strategy for fault tolerant control, are evaluated by simulation and field tests.
... Relevant methodological issues and some solutions have been discussed in [1] on the basis of selecting amongst models that employ a different level of abstraction. Current FRTS research directions involve the distribution of experiments over a network of workstations, intelligent control [2] and fault diagnosis [3], interactive-dynamic simulation (i.e., manipulation of the simulator by the user in RT) [4]. A conceptual methodology for conducting FRTS, meeting the RT constraints, has been introduced in [5]. ...
Conference Paper
Full-text available
Even though numerous applications use faster-than-real-time simulation (FRTS), there is no concrete mechanism that focuses on planning and execution of the experiment. Having previously introduced a conceptual FRTS methodology, we adopt the Discrete EVent System Specification (DEVS) formalism and, specifically, RT-DEVS, and propose a specification for developing a DEVS-based faster-than-real-time (FRT) simulator. For this purpose, we adopt the architecture for real-time (RT) simulators, the corresponding specification and application programming interfaces (APIs) presented in a previous work. Due to the hard RT constraints, we also examine timing issues concerning the execution of simulation activities and finally comment on the use of DEVS. Based on the specification provided, design and implementation of FRTS systems can be realized for diverse domains.
... It is important to note that, like other multivariate statistical methods, PCA works under three assumptions: the data follow a multivariate normal distribution; there is no autocorrelation among observations; and the variables are stationary, that is, the variables maintain a constant mean and standard deviation over time [6], [7]. In the case of data with a non-normal distribution, it is possible to carry out an appropriate transformation, such as square root or logarithm [8], in order to improve the distribution of the data. ...
Article
Dynamic Principal Component Analysis (DPCA) is an adequate tool for monitoring large-scale systems based on a model of multivariate historical data under the assumption of stationarity; however, false alarms occur for non-stationary new observations during the monitoring phase. In order to reduce the false alarm rate, this paper extends DPCA-based monitoring to non-stationary data from linear dynamic systems, including an on-line means estimator that standardizes new observations according to the estimated means. The effectiveness of the proposed methodology is evaluated for fault detection in an interconnected tanks system.
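A minimal sketch of such an on-line means estimator is shown below; an exponentially weighted mean with a fixed forgetting factor is assumed here purely for illustration, and may differ from the estimator used in the paper.

```python
# Sketch: adaptive standardization of new observations with an on-line
# (exponentially weighted) mean estimate; the forgetting factor is illustrative.
import numpy as np

class AdaptiveScaler:
    def __init__(self, mean0, std0, alpha=0.02):
        self.mean = np.asarray(mean0, dtype=float)
        self.std = np.asarray(std0, dtype=float)
        self.alpha = alpha                          # forgetting factor

    def standardize(self, x):
        z = (x - self.mean) / self.std              # score with current mean estimate
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x  # update mean
        return z

scaler = AdaptiveScaler(mean0=np.zeros(3), std0=np.ones(3))
rng = np.random.default_rng(3)
for _ in range(5):
    x = rng.normal(loc=0.5, scale=1.0, size=3)      # new operating point (shifted mean)
    print(np.round(scaler.standardize(x), 2))
```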
Article
Cross-time spatial dependence (i.e., the interaction between different variables at different time points) is indispensable for detecting anomalies in multivariate time series, as certain anomalies may have time delays in their propagation from one variable to another. However, accurately capturing cross-time spatial dependence remains a challenge. Specifically, real-world time series usually exhibits complex and incomprehensible evolutions that may be compounded by multiple temporal states (i.e., temporal patterns, such as rising, fluctuating, and peak). These temporal states mix and overlap with each other and exhibit dynamic and heterogeneous evolution laws in different time series, making the cross-time spatial dependence extremely intricate and mutable. Therefore, a cross-time spatial graph network with fuzzy embedding is proposed to disentangle latent and mixing temporal states and exploit it to meticulously learn cross-time spatial dependence. First, considering that temporal states are diversiform and their mixing modes are unknown, we introduce a fuzzy state set to uniformly characterize potential temporal states and adaptively generate corresponding membership degrees to depict how these states mix. Further, we propose a cross-time spatial graph, quantifying similarities among fuzzy states and sensing their dynamic evolutions, to flexibly learn mutable cross-time spatial dependence. Finally, we design state diversity and temporal proximity constraints to ensure the differences among fuzzy states and the evolution continuity of fuzzy states. Experiments on real-world datasets show that the proposed model outperforms the state-of-the-art models.
Article
Big Data will revolutionize modern industry by improving process optimization, facilitating insight discovery, and improving decision-making. This big data revolution presents a multitude of possibilities and challenges in evolving from traditional batch processes to smart batch processes. This tremendous potential requires the ability to extract value from vast amounts of industrial process data. Using a new three-dimensional (3D) perspective of time, batch, and operational context, this paper explores smart batch processes with higher efficiency, greater profitability, and longer sustainability. First, we review the traditional one-dimensional (1D) perspective on batch processes and summarize the existing two-dimensional (2D) perspectives on batch processes, i.e., modeling, monitoring, control, and optimization methods. Based on those results, the spotlight will focus on how big data can be used to achieve smart batch processes using the 3D perspective. This will include detailed discussions of definitions and concepts, operational mechanisms, and the benefits and advantages of smart batch processes. For further implementation of the 3D perspective, we present several monitoring and control methodologies. Next, we analyze several challenges and issues in implementing smart batch processes in the era of Big Data. In conclusion, we provide both a novel viewpoint and encouragement for future research into batch process automation from the 3D perspective.
Article
Full-text available
Real-time monitoring systems are important for industry since they allow for avoiding unplanned system stops and keeping system availability high. The technical requirements for such systems include being both scalable and online, as the amount of generated data is increasing with time. Therefore, monitoring systems must integrate tools that can manage and analyze the data streams. The data stream management system is a stream processing tool that has the ability to manage and support operations on data streams in real-time. Several researchers have proposed and tested real-time monitoring systems which have the ability to search big data streams. In this paper, the research works that discuss the analysis of online data streams for fault detection in industry are reviewed. Based on the literature analysis, the industrial needs and challenges of monitoring big data streams are presented. Furthermore, feasible suggestions for improving the real-time monitoring system are proposed.
Article
Roasting is the first procedure in the zinc smelting process. The stable and safe operation of the roasting process is significant for guaranteeing the quality of the output zinc and reducing industrial pollution and energy consumption. In order to realize safe and stable operation of the roasting process, it is particularly important to monitor the roasting process accurately. However, the first principles model of the roasting process is complex, with coupled physical and chemical reactions, and normal operating conditions may shift to abnormal conditions such as over-decomposition, under-oxidation, and fluidized bed deposition due to fluctuations in raw material composition. On the other hand, data-driven process monitoring methods suffer from imbalanced data volumes across operating conditions, especially abnormal conditions, for which data are often insufficient. In order to address these problems and achieve accurate process monitoring for the zinc smelting roasting process, this paper proposes a hybrid first principles and data-driven process monitoring method. In detail, an integrated principal component analysis and common subspace learning (PCA-CSL) method is first proposed to address the problem that different operating conditions often have large divergence and imbalanced data volumes. Here, the PCA algorithm is established for the operating conditions with sufficient data. On the contrary, for the operating conditions with insufficient data, a CSL algorithm is proposed, which uses the operating conditions with sufficient data to assist modeling for the operating conditions with insufficient data in a common subspace, so as to realize accurate fault detection. Finally, an operating condition decision rule library is established based on the integrated first principles and data-driven approach, and the parameters of the decision rules are optimized with the particle swarm optimization (PSO) method to realize rule-based reasoning (RBR) based abnormal condition diagnosis. Extensive experiments are implemented to verify the effectiveness of the proposed method.
Article
Process dynamic behaviors resulting from closed-loop control and the inherence of processes are ubiquitous in industrial processes and bring a considerable challenge for process monitoring. Many methods have been developed for dynamic process monitoring, of which the dynamic latent variables (DLV) model is one of the most practical and promising branches. This paper provides a timely retrospective study of typical methods to fill the void in the systematic analysis of DLV methods for dynamic process monitoring. First, several classical DLV methods are briefly reviewed from three aspects, including original ideas, the determination of parameters, and offline statistics design. Second, a discussion on the relationships of the discussed methods has been established to make a clear understanding of process dynamics explained by each method. Third, five cases of a three-phase flow process are provided to illustrate the effectiveness of the methods from the application viewpoint. Finally, future research directions on dynamic process monitoring have also been provided. The primary objective of this paper is to summarize the prevalent DLV methods for dynamic process monitoring and thus highlight a valuable reference for further improvement on DLV models and the selection of algorithms in practical applications.
Article
As a typical process monitoring method for the large-scale industrial process, the distributed principal components analysis (DPCA) needs to be improved because of its rough selection for the variables in each subblock. Moreover, for DPCA, the process dynamic property is ignored and invalid fault diagnosis may occur. Therefore, a performance-driven distributed canonical variate analysis (DCVA) is proposed. Firstly, with historical fault information, the genetic algorithm is utilized to select appropriate variables for each subblock; secondly, canonical variate analysis is introduced to capture the dynamic information for performance improvement; finally, a novel fault diagnosis method is developed for the DCVA model. Case studies on a numerical example and the Tennessee Eastman benchmark process demonstrate the effectiveness of the proposed model. Highlights:
• A novel fault diagnosis approach based on the distributed CVA model is presented.
• The dynamic property of process data for each sub-block is firstly captured by CVA.
• With the genetic algorithm, the historical fault information is utilized to select appropriate variables in each sub-block.
• The superiority of the developed method is validated on the numerical example and the TE process.
Article
Fault detection and diagnosis (FDD) systems are developed to characterize normal variations and detect abnormal changes in a process plant. It is always important for early detection and diagnosis, especially in chemical process systems to prevent process disruptions, shutdowns, or even process failures. However, there have been only limited reviews of data-driven FDD methods published in the literature. Therefore, the aim of this review is to provide the state-of-the-art reference for chemical engineers and to promote the application of data-driven FDD methods in chemical process systems. In general, there are two different groups of data-driven FDD methods: the multivariate statistical analysis and the machine learning approaches, which are widely accepted and applied in various industrial processes, including chemicals, pharmaceuticals, and polymers. Many different multivariate statistical analysis methods have been proposed in the literature, such as principal component analysis, partial least squares, independent component analysis, and Fisher discriminant analysis, while the machine learning approaches include artificial neural networks, neuro-fuzzy methods, support vector machine, Gaussian mixture model, K-nearest neighbor, and Bayesian network. In the first part, this review intends to provide a comprehensive literature review on applications of data-driven methods in FDD systems for chemical process systems. In addition, the hybrid FDD frameworks have also been reviewed by discussing the distinct advantages and various constraints, with some applications as examples. However, the choice for the data-driven FDD methods is not a straightforward issue. Thus, in the second part, this paper provides a guideline for selecting the best possible data-driven method for FDD systems based on their faults. Finally, future directions of data-driven FDD methods are summarized with the intent to expand the use for the process monitoring community.
Book
Use of a membrane within a bioreactor (MBR), either microbial or enzymatic, is a technology that has existed for 30 years to increase process productivity and/or facilitate the recovery and the purification of biomolecules. Currently, this technology is attracting increasing interest in speeding up the process and in better sustainability. In this work, we present the current status of MBR technologies. Fundamental aspects and process design are outlined and emerging applications are identified in both aspects of engineering, i.e., enzymatic and microorganism (bacteria, animal cells, and microalgae), including microscale aspects and wastewater treatment. Comparison of this integrated technology with classical batch or continuous bioreactors is made to highlight the performance of MBRs and identify factors limiting their performance and the different possibilities for their optimization.
Thesis
This thesis work, carried out within the European FP7 project PAPYRUS (Plug and Play monitoring and control architecture for optimization of large scale production processes), first concerned the synthesis and implementation of an original modeling, diagnosis, and reconfiguration approach. The approach is based on the generation of causal graphs that model, in real time, the behavior of a complex system. The target of this first study was the Stora Enso paper mill at Imatra, Finland, which was the application process of the PAPYRUS project. As a logical follow-up to this first part, an approach allowing the system to accommodate certain specific faults was defined by adjusting the setpoint signals of various control loops. The manuscript is structured in three parts. The first part is dedicated to the presentation of the European PAPYRUS project. The role of each partner is described through the different work packages, and the thesis work is positioned within them. The second part of the thesis aims at generating a model useful for diagnosis based solely on the measured signals of the system. More precisely, a graphical causal model is presented by identifying the causal links between the measured variables. Cross-correlation analyses, transfer entropy, and the Granger causality test are carried out. A diagnosis approach based on the resulting graphical model is then proposed using a sequential hypothesis test. The last part is dedicated to the fault accommodation problem. The graph used to establish the system diagnosis is reworked to reveal the various control loops of the system. A strategy for selecting influential setpoints is then proposed, with the objective of adjusting them to compensate for the effect of the fault that has occurred.
Article
Six sigma has been widely adopted in a variety of industries as a proven management innovation methodology to produce high-quality products and reduce costs at all levels of an enterprise. However, in the case of process industries, the application of six sigma activities has had only limited success. This paper proposes a plant operation system which can guide plant engineers and operators in pursuing six sigma activities by providing support for key elements of six sigma: measurement, analysis, improvement and control. Multivariate statistical process control (MSPC) techniques have been employed as key technologies for the system, along with plant information systems. This paper also discusses the future research issues that should be addressed to implement the described system.
Conference Paper
Full-text available
The conventional fault location method based on the signed directed graph (SDG) can reflect the occurrence of a fault by node state. However, it is unable to explain the severity of the fault. The conventional method has low inference efficiency due to the complexity of the model structure, and it has difficulty identifying the real fault origin. In order to solve the above-mentioned problems, a hybrid fault location method is developed from three aspects (model, structure and inference). Firstly, an improved five-range SDG model based on fuzzy set theory is proposed to reveal potential causal relationships among variables, to increase quantitative information and to reflect the severity of the fault. Moreover, a structure simplification method is used to reduce model complexity, and the set of fault origin candidates can be obtained subsequently. In addition, a consistent-path priority, which includes the fault magnification, the fault support degree and the fault propagation speed, is developed to identify the fault origin from the candidates. Finally, the reactor of the Tennessee Eastman process is modeled with the proposed method, and the result shows that this method has excellent fault location capability in fault diagnosis.
Article
A novel architecture for a real-time Utilities Consumption Model (UCM) has been developed. The online UCM is capable of estimating the contributions from individual items of equipment towards the total instantaneous load of key utilities in a manufacturing plant. It also has the capability to forecast future consumption for areas of a plant that are scheduled. The UCM is a useful addition to the industrial control tool-set as it provides an effective means of minimising the energy impact of the timing and scheduling aspects of plant operations. A case study, demonstrating the application of the UCM at the Carlton & United Breweries (CUB) plant at Yatala, Australia, is included.
Article
To solve complex constrained optimization problems in fault diagnosis, a new immune-genetic algorithm based on artificial neural networks is proposed in this paper. The paper emphasizes a technique for early fault detection and diagnosis, which is important in the process industries. In this method, antibodies are first generated and trained from randomly generated antigens. An efficient immune system with the capability to recognize self- and non-self-antigens is supported by these trained antibodies. The resulting immune system is built into genetic algorithms and can be used to identify and repair the illegal and infeasible chromosomes during the genetic iterations. The feature vectors of the training samples are then fed into the ANN to train the network. The output layer is clustered into several regions, with each region corresponding to a fault. Finally, new samples are presented to the trained ANN so that faults can be recognized from the region in which the output neuron is located. Experimental results indicate that diagnosis accuracy with the proposed method is higher than that achieved using an immune-genetic algorithm based on artificial neural networks, and the diagnostic results also have high visibility.
Article
In most process monitoring, it is assumed that the observations from the process output are independent and identically distributed. But for many processes the observations are correlated, and when this correlation builds up throughout the process it is known as autocorrelation. Autocorrelation among the observations can have a significant effect on the performance of a control chart. The detection of special cause/s in the process may become very difficult in such situations. Several types of control charts and their combinations have been evaluated for their ability to detect changes in the process mean and variance over the past two decades. To counter the effect of autocorrelation, various new methodologies and approaches, such as double sampling and variable sample sizes and sampling intervals, have been suggested by various researchers. Researchers have also used Markov chains, time-series approaches, MATLAB, and artificial neural networks for data simulation. This paper provides a survey and brief summary of the work on the development of control charts for variables to monitor the mean and dispersion of autocorrelated data.
Article
The Tennessee Eastman (TE) process is a typical multivariate chemical process. It has characteristics of complexity and nonlinearity. Therefore, it is an ideal research platform standing in for real industrial processes whose data are difficult to obtain. Many scholars have carried out studies on monitoring approaches and applied these methods on the platform. However, it is not easy to obtain ideal simulation results when detecting some special faults in the TE process, such as fault 3. In this paper, an integration of canonical variate analysis and independent component analysis (CV-ICA) is proposed. It combines the advantages of canonical variate analysis (CVA) and independent component analysis (ICA) to solve these problems. CV-ICA applies CVA to calculate the canonical variates from the process data, and then employs ICA to extract independent components (ICs). The monitoring simulation demonstrates the effectiveness of the proposed method.
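A rough sketch of this kind of two-stage pipeline is shown below under strong simplifications: scikit-learn's CCA between lagged past and future data stands in for the CVA step, plain FastICA replaces the ICA variant of the cited method, and the data are simulated rather than taken from the TE process.

```python
# Rough pipeline sketch: canonical variates of past/future lagged data,
# followed by ICA on the resulting components. CCA and FastICA are stand-ins
# for the CVA and ICA steps of the cited method; data are simulated.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
e = rng.normal(size=(600, 5))
y = np.zeros_like(e)
for t in range(1, 600):                      # simple AR(1) process data
    y[t] = 0.8 * y[t - 1] + e[t]

lag, n = 3, y.shape[0]
past = np.hstack([y[i:n - 2 * lag + i] for i in range(lag)])       # past window
future = np.hstack([y[lag + i:n - lag + i] for i in range(lag)])   # future window

cca = CCA(n_components=4)
z_past, _ = cca.fit_transform(past, future)  # canonical variates (CVA-like step)

ica = FastICA(n_components=4, random_state=0, max_iter=1000)
s = ica.fit_transform(z_past)                # independent components of the variates
print("canonical variates:", z_past.shape, " independent components:", s.shape)
```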
Article
In recent times the ever-present drive to minimise the cost of production has been compounded with a strong focus on energy efficiency and environmental footprint. Previous activities such as updating old equipment with energy efficient replacements and plant maintenance & optimisation have all improved the cost of production to an extent. However an efficient plant doesn't guarantee efficient production. Intelligent systems can be used to monitor and improve the efficiency of the brewery. G2 is one of the world's leading real-time intelligent system development platforms. It is used by a significant number of the top Fortune 500 companies for a range of applications including control, scheduling, simulation, alarming and rule-based control. At the Foster's brewery in Yatala, Queensland we have implemented a large G2 system that has been configured to monitor both the plant and process control equipment. It monitors over 100,000 tags from 450 devices and it provides assistance in a number of areas including energy management and utilities & chemical consumption. A module for operator decision support has also been implemented for key areas of the brewery. In this module G2 presents a number of automatically prioritised recommendations to the operators in real-time. This system has helped us to both reduce production costs and to improve the reliability of the plant.
Article
An overview of fault diagnosis methodologies for process operation is given with a view to the development of computer-aided tools. First, different features of automated fault diagnosis are defined and explained, followed by an explanation of the different types and classes of methodologies.
Article
Purpose The purpose of this paper is the development of an empirically based typology of condition based maintenance (CBM) approaches, including the relevant characteristics and requirements. Design/methodology/approach An exploratory case study was conducted in a major gas production facility. The CBM typology that resulted from this case study was subsequently tested against a large set of CBM literature. Findings In the literature, CBM is usually presented as a single theory or practice. The paper finds that CBM in fact includes several different approaches and that each of the approaches is only suitable in situations where the specific characteristics of the approach match the situational characteristics. Aided by these findings, a new typology for CBM was developed. The typology is based on the method for obtaining the expected value, or trend (through statistical vs analytical modeling) and the type of data used (process vs failure data). A subsequent literature survey reveals that the proposed typology is applicable for the categorization of a large number of CBM cases found in the literature. Practical implications One of the most important requirements in selecting and using a CBM approach is the availability and integration of various types of knowledge, in particular process engineering and maintenance engineering knowledge. Practitioners can use these insights to assess current CBM cases, and identify the key characteristics of current and future use of various CBM types. Originality/value This paper presents a novel and empirically based framework for the classification of the different CBM types. Such frameworks were lacking in the current literature. The paper adds to maintenance engineering literature by identifying the key dimensions of the various types along with their key requirements.
Article
An empirical model-based framework for monitoring and diagnosing batch processes is proposed. With the input of past successful and unsuccessful batches, the off-line portion of the framework constructs empirical models. Using online process data of a new batch, the online portion of the framework makes monitoring and diagnostic decisions on a real-time basis. The proposed framework consists of three phases: monitoring, diagnostic screening, and diagnosis. For monitoring and diagnosis purposes, the multiway principal-component analysis (MPCA) model and discriminant model are adopted as reference models. As an intermediate step, the diagnostic screening phase narrows down the possible cause candidates of the fault in question. By analysing the MPCA monitoring model, the diagnostic screening phase constructs a variable influence model to screen out unlikely cause candidates. The performance of the proposed framework is tested using a real dataset from a PVC batch process. It has been shown that the proposed framework produces reliable diagnosis results. Moreover, the inclusion of the diagnostic screening phase as a pre-diagnostic step has improved the diagnosis performance of the proposed framework, especially in the early time intervals.
Article
Multivariate statistical process monitoring (SPM), and fault detection and diagnosis (FDD) methods are developed to monitor the critical control points (CCPs) in a continuous food pasteurization process. Multivariate SPM techniques effectively use information from all process variables to detect abnormal process behavior. Fault diagnosis techniques isolate the source cause of the deviation in process variable(s). The methods developed are illustrated by implementing them to monitor the critical control points and diagnose causes of abnormal operation of a high temperature short time (HTST) pasteurization pilot plant. The detection power of multivariate SPM and FDD techniques over univariate SPM techniques is shown and their integrated use to ensure the product safety and quality in food processes is demonstrated.
Article
An adaptive agent-based hierarchical framework for fault type classification and diagnosis in continuous chemical processes is presented. Classification techniques such as Fisher’s discriminant analysis (FDA) and partial least-squares discriminant analysis (PLSDA) and diagnosis tools such as variable contribution plots are used by agents in this supervision system. After an abnormality is detected, the classification results reported by different diagnosis agents are summarized via a performance-based criterion, and a consensus diagnosis decision is formed. In the agent management layer of the proposed system, the performances of diagnosis agents are evaluated under different fault scenarios, and the collective performance of the supervision system is improved via performance-based consensus decision and adaptation. The effectiveness of the proposed adaptive agent-based framework for the classification of faults is illustrated using a simulated continuous stirred tank reactor (CSTR) network.
Article
A fault diagnosis system is developed by integrating principal component analysis (PCA) with fuzzy logic knowledge-based (FLKB) systems. A PCA model is created using normal state data. Then it is used to project normal and faulty data during the training stage. It evaluates the values of the process variables and their correlations, allowing fast and reliable fault detection. Once detection is performed, an FLKB system is used to evaluate the contributions of each variable to changes in the process, finding the root causes of the detected abnormal event. A simple methodology to automatically extract compact process information is presented. Then, an optimization algorithm is implemented to improve the isolation performance. The methodology is demonstrated in an academic case study and in the Tennessee Eastman process benchmark.
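A minimal sketch of the general idea, combining a PCA model of normal operating data with a simple fuzzy membership rule on variable contributions, could look like the following; the membership function, thresholds, component count, and synthetic data are assumptions for illustration only, not the paper's tuned FLKB rules.

```python
# Minimal sketch of PCA-based detection followed by a fuzzy-style
# evaluation of variable contributions to the residual (SPE).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X_normal = rng.normal(size=(500, 6))              # normal operating data
mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
Z = (X_normal - mean) / std
pca = PCA(n_components=3).fit(Z)

def spe_and_contributions(x):
    """Squared prediction error and per-variable squared residual contributions."""
    z = (x - mean) / std
    residual = z - pca.inverse_transform(pca.transform(z.reshape(1, -1))).ravel()
    return float((residual ** 2).sum()), residual ** 2

def fuzzy_high(contrib, low=0.5, high=2.0):
    """Degree (0..1) to which a contribution is 'high' (simple ramp membership)."""
    return np.clip((contrib - low) / (high - low), 0.0, 1.0)

x_faulty = np.array([0.1, -0.2, 4.0, 0.3, -0.1, 0.2])   # variable 3 has shifted
spe, contrib = spe_and_contributions(x_faulty)
print("SPE:", round(spe, 2))
print("membership of 'high contribution':", np.round(fuzzy_high(contrib), 2))
```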
Article
Product quality and operation safety are important aspects of industrial processes, particularly those with large numbers of correlated process variables. Principal component analysis (PCA) has been widely used in multivariate process monitoring for its ability to reduce process dimensions. PCA and other statistical techniques, however, have difficulties in differentiating faults with similar time-domain process characteristics. A wavelet-based time-frequency approach is developed in this paper to improve PCA-based methods by extending the time-domain process features into time-frequency information. Subsequently, a similarity measure is presented to compare process features for on-line process monitoring and fault diagnosis. Simulation results show that the proposed multivariate time-frequency process feature is effective in both fault detection and diagnosis, illustrating its potential for real-world application.
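One simple way to turn time-domain signals into time-frequency features before a PCA step is sketched below, assuming the PyWavelets package is available; the choice of wavelet, decomposition level, energy features, and synthetic windows are illustrative assumptions rather than the feature construction used in the paper.

```python
# Minimal sketch: wavelet-energy features per signal window, then PCA.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Energy of the wavelet coefficients at each decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Feature matrix: one row per window of a (synthetic) process variable.
windows = rng.normal(size=(50, 256))
features = np.vstack([wavelet_energy_features(w) for w in windows])

pca = PCA(n_components=2).fit(features)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```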
Article
An adaptive hierarchical framework for process supervision and fault-tolerant control with agent-based systems is presented. The framework consists of modules for fault detection and diagnosis (FDD), system identification and distributed control, and a hierarchical structure for performance-based agent adaptation. Multivariate continuous process monitoring methodologies and several fault discrimination and classification techniques are implemented in the FDD modules to be used by multiple agents. In the process supervision layer, continuous communication between the FDD and control modules conveys the existence of an abnormality in the process, the type of the abnormality, and the affected process sections to the distributed model predictive control agents. In the agent management layer, the performances of all FDD and control agents are evaluated under specific process conditions. Performance-based consensus criteria are used to prioritize the best-performing agents in consensus decision making at every level of process supervision and fault-tolerant control. The collective performance of the supervision system is improved via performance-based consensus decision making and adaptation. The effectiveness of the proposed adaptive agent-based framework for fault-tolerant control is illustrated using a simulated continuous stirred-tank reactor network. Copyright © 2011 John Wiley & Sons, Ltd.
Article
In this paper we discuss the basic procedures for the implementation of multivariate statistical process control via control charting. Furthermore, we review multivariate extensions for all kinds of univariate control charts, such as multivariate Shewhart-type control charts, multivariate CUSUM control charts and multivariate EWMA control charts. In addition, we review unique procedures for the construction of multivariate control charts, based on multivariate statistical techniques such as principal components analysis (PCA) and partial least squares (PLS). Finally, we describe the most significant methods for the interpretation of an out-of-control signal. Copyright © 2006 John Wiley & Sons, Ltd.
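As an example of the simplest chart in this family, a Hotelling T2 chart for individual multivariate observations can be sketched as below; the control limit uses the standard F-distribution formula for Phase II monitoring with parameters estimated from m reference samples, and the data are synthetic.

```python
# Minimal sketch of a Hotelling T^2 chart for individual observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reference = rng.normal(size=(200, 4))            # in-control reference data
m, p = reference.shape
mu = reference.mean(axis=0)
S_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def t2(x):
    d = x - mu
    return float(d @ S_inv @ d)

alpha = 0.01
ucl = p * (m + 1) * (m - 1) / (m * (m - p)) * stats.f.ppf(1 - alpha, p, m - p)

x_new = np.array([0.2, -0.1, 3.5, 0.4])          # third variable has shifted
t2_new = t2(x_new)
print(f"T2 = {t2_new:.2f}, UCL = {ucl:.2f}, out of control: {t2_new > ucl}")
```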
Article
Two dynamic grey models, DGM(1,1), for the verification cycle and the life cycle of a measuring instrument were set up, one based on a time sequence and one on a frequency sequence, according to the statistical features of the examination data and a weighting method. A specific case, a vernier caliper, shows that the fitting and forecasting precision of the models is high, that the cycles differ markedly under different working conditions, and that the frequency-sequence model forecasts better than the time-sequence model. Combining the dynamic grey models with an auto-manufacturing case, controlling and information subsystems for the verification cycle and the life cycle were developed based on information integration, multi-sensor control, and management control. The models can be used in the production process to help enterprises reduce errors, costs, and flaws.
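For orientation, a plain GM(1,1) grey forecasting model, the family to which the DGM(1,1) variants above belong, can be sketched as follows; the sample series and forecast horizon are illustrative assumptions rather than the instrument-verification data of the paper.

```python
# Minimal sketch of GM(1,1) grey forecasting: accumulate the series,
# estimate the grey parameters by least squares, and forecast by
# differencing the fitted time-response function.
import numpy as np

def gm11(x0, n_forecast=3):
    """Fit GM(1,1) to series x0 and forecast n_forecast further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # restore by differencing
    return x0_hat[len(x0):]

# Example: forecast the next verification intervals from past ones (months).
print(gm11([11.2, 11.9, 12.4, 13.1, 13.9], n_forecast=2))
```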
Article
Multivariate statistical process control (MSPC) can be applied for condition monitoring (CM) purposes. MSPC is implemented using a variety of techniques including neural networks (NNs). In situations where the number of process attributes is sufficiently large (e.g. 10 or more), concerns can arise with respect to training of NNs for pattern recognition. A classification method known as novelty detection (ND) can provide an effective alternative to conventional NN solutions that suffer from the above problem. Despite its great potential, ND is still unknown to the broad community of manufacturing engineers. This paper successfully demonstrates the ability of ND, using Gaussian mixture models, to notify operators of an end-milling process of the presence of faulty tool conditions. A significant achievement is that ND is used to identify abnormal time-series patterns as opposed to individual vectors of multiple simultaneous measurements related to abnormal conditions. Such patterns are found in windowed streams of signals related to 10 different process features (with an effective problem dimension of 140). The paper also investigates some of the issues related to implementation of ND for pattern recognition in condition monitoring.
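A minimal sketch of Gaussian-mixture-based novelty detection is shown below; it assumes pre-extracted feature vectors (the paper's windowed, 140-dimensional signal features are not reproduced), a fixed number of mixture components, and a simple percentile-based likelihood threshold.

```python
# Minimal sketch of novelty detection with a Gaussian mixture model:
# fit the mixture to normal-condition features, threshold the
# log-likelihood, and flag new windows that fall below the threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
normal_features = rng.normal(size=(400, 10))          # features of a healthy tool

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(normal_features)

# Threshold at a low percentile of the normal-condition log-likelihoods.
threshold = np.percentile(gmm.score_samples(normal_features), 1)

new_window = rng.normal(loc=3.0, size=(1, 10))        # suspiciously shifted features
is_novel = gmm.score_samples(new_window)[0] < threshold
print("faulty tool condition suspected:", bool(is_novel))
```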
Conference Paper
Full-text available
Two new subspace algorithms for identifying mixed deterministic-stochastic systems are derived. Both algorithms determine state sequences through the projection of input and output data. These state sequences are shown to be outputs of nonsteady-state Kalman filter banks. From these it is easy to determine the state space system matrices. The algorithms are always convergent (noniterative) and numerically stable since they only make use of QR and singular value decompositions. The two algorithms are similar, but the second one trades off accuracy for simplicity. An example involving a glass oven is considered.
Article
Full-text available
We consider the notion of qualitative information and the practicalities of extracting it from experimental data. Our approach, based on a theorem of Takens, draws on ideas from the generalized theory of information known as singular system analysis due to Bertero, Pike and co-workers. We illustrate our technique with numerical data from the chaotic regime of the Lorenz model.
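The core construction behind this approach, a delay-embedded trajectory matrix whose singular spectrum separates signal directions from noise, can be sketched in a few lines; the noisy sine wave and window length below are illustrative assumptions standing in for data from a chaotic system.

```python
# Minimal sketch of delay embedding plus singular value decomposition:
# build a trajectory matrix from a scalar series and inspect its
# singular spectrum to estimate how many directions carry signal.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.05 * rng.normal(size=t.size)

window = 25                                            # embedding window length
traj = np.column_stack([x[i:i + window] for i in range(len(x) - window)]).T

# Normalized singular values: large ones correspond to signal directions.
_, s, _ = np.linalg.svd(traj - traj.mean(axis=0), full_matrices=False)
print(np.round(s[:6] / s[0], 3))
```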
Article
Industrial continuous processes may have a large number of process variables and are usually operated for extended periods at fixed operating points under closed-loop control, yielding process measurements that are autocorrelated, cross-correlated, and collinear. A statistical process monitoring (SPM) method based on multivariate statistics and system theory is introduced to monitor the variability of such processes. The statistical model that describes the in-control variability is based on a canonical-variate (CV) state-space model that is an equivalent representation of a vector autoregressive moving-average time-series model. The CV state variables obtained from the state-space model are linear combinations of the past process measurements that explain the variability of the future measurements the most. Because of this distinctive feature, the CV state variables are regarded as the principal dynamic directions. A T2 statistic based on the CV state variables is used for developing an SPM procedure. Simple examples based on simulated data and an experimental application based on a high-temperature short-time milk pasteurization process illustrate advantages of the proposed SPM method.
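A minimal sketch of the canonical-variate construction behind such a chart is given below: past and future stacked vectors are formed, their scaled cross-covariance is decomposed by SVD, and the leading canonical variates of the past serve as unit-variance state variables whose T2 is charted. The lag length, state order, and simulated AR(1) data are illustrative assumptions, not the settings of the paper.

```python
# Minimal sketch of canonical-variate "state" extraction for T2 monitoring.
import numpy as np

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

rng = np.random.default_rng(6)
N, m = 1000, 3
y = np.zeros((N, m))
for k in range(1, N):                         # autocorrelated measurements
    y[k] = 0.8 * y[k - 1] + rng.normal(size=m)

lag, n_states = 5, 2
rows = range(lag, N - lag)
past = np.array([y[t - lag:t].ravel() for t in rows])     # y(t-lag)..y(t-1)
future = np.array([y[t:t + lag].ravel() for t in rows])   # y(t)..y(t+lag-1)
past -= past.mean(axis=0)
future -= future.mean(axis=0)

Spp = past.T @ past / len(past)
Sff = future.T @ future / len(future)
Sfp = future.T @ past / len(past)

Spp_ih, Sff_ih = inv_sqrt(Spp), inv_sqrt(Sff)
_, _, Vt = np.linalg.svd(Sff_ih @ Sfp @ Spp_ih)

J = Vt[:n_states] @ Spp_ih                    # maps a past vector to the states
states = past @ J.T                           # unit-variance canonical states
T2 = (states ** 2).sum(axis=1)                # T2 chart value for each sample
print("mean T2 (approximately the state order):", round(float(T2.mean()), 2))
```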
Article
To ensure safe operation of continuous processes and to produce consistently high quality products, it is important to monitor process performance in real time. Since traditional analytical instruments are usually expensive to install, a process model can be used to monitor process behavior. In this paper, a monitoring approach utilizing multi-way principal component analysis (MPCA) is studied. The method overcomes the assumption that the system is at steady state and it provides a predictive monitoring approach for continuous processes. The proposed approach using MPCA models can predict faults in advance of traditional monitoring approaches. A multi-block extension of the basic MPCA method is presented. The main focus of this paper is on the monitoring of multi-block continuous dynamic processes. The Tennessee Eastman process is used for illustrating the new approach.
Article
Statistical process control methods for monitoring processes with multivariate measurements in both the product quality variable space and the process variable space are considered. Traditional multivariate control charts based on χ2 and T2 statistics are shown to be very effective for detecting events when the multivariate space is not too large or ill-conditioned. Methods for detecting the variable(s) contributing to the out-of-control signal of the multivariate chart are suggested. Newer approaches based on principal component analysis and partial least squares are able to handle large ill-conditioned measurement spaces; they also provide diagnostics which can point to possible assignable causes for the event. The methods are illustrated on a simulated process of a high pressure low density polyethylene reactor, and examples of their application to a variety of industrial processes are referenced.
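One common way to attribute an out-of-control T2 signal to individual variables is sketched below: T2 = d'S^-1 d is split into per-variable terms d_j (S^-1 d)_j, which sum exactly to T2 (individual terms can be negative, a known caveat of this decomposition). The reference data and shifted sample are synthetic assumptions for illustration.

```python
# Minimal sketch of a per-variable contribution decomposition of T^2.
import numpy as np

rng = np.random.default_rng(9)
reference = rng.normal(size=(300, 5))               # in-control reference data
mu = reference.mean(axis=0)
S_inv = np.linalg.inv(np.cov(reference, rowvar=False))

x = np.array([0.1, -0.2, 3.0, 0.0, 0.2])            # variable 3 has shifted
d = x - mu
contributions = d * (S_inv @ d)                      # per-variable terms, sum to T^2
print("T2 =", round(float(d @ S_inv @ d), 2))
print("contributions:", np.round(contributions, 2))
```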
Chapter
Any data table produced in a chemical investigation can be analysed by bilinear projection methods, i.e. principal components and factor analysis and their extensions. Representing the table rows (objects) as points in a p-dimensional space, these methods project the point swarm of the data set, or parts of it, down onto an F-dimensional subspace (plane or hyperplane). Different questions put to the data table correspond to different projections. This provides an efficient way to convert a data table into a few informative pictures showing the relations between objects (table rows) and variables (table columns). The methods are presented geometrically and mathematically in parallel with chemical illustrations. Methods that extract too much information from a data table are, in the long run, more dangerous than methods that are conservative with respect to the amount of extracted information.
Article
Traditionally, control charts are developed assuming that the sequence of process observations to which they are applied are uncorrelated. Unfortunately, this assumption is frequently violated in practice. The presence of autocorrelation has a serious impact on the performance of control charts, causing a dramatic increase in the frequency of false alarms. This paper presents methods for applying statistical control charts to autocorrelated data. The primary method is based on modeling the autocorrelative structure in the original data and applying control charts to the residuals. We show that the exponentially weighted moving average (EWMA) statistic provides the basis of an approximate procedure that can be useful for autocorrelated data. Illustrations are provided using real process data.
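The residual-charting idea can be sketched as follows: fit a simple AR(1) model on an in-control window, then apply an EWMA chart with the usual asymptotic limits to the one-step-ahead residuals. The AR order, smoothing constant, shift size, and simulated data are illustrative assumptions, not the paper's examples.

```python
# Minimal sketch of residual charting for autocorrelated observations.
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = np.zeros(n)
for k in range(1, n):                              # in-control AR(1) process
    x[k] = 5.0 + 0.7 * (x[k - 1] - 5.0) + rng.normal(scale=0.5)
x[300:] += 2.0                                     # sustained mean shift at t = 300

train = x[:250]                                    # in-control window for fitting
mu = train.mean()
phi = np.polyfit(train[:-1] - mu, train[1:] - mu, 1)[0]    # AR(1) coefficient
residuals = (x[1:] - mu) - phi * (x[:-1] - mu)              # one-step residuals

lam, L = 0.2, 3.0
sigma = residuals[:249].std()                      # residual spread, in-control part
ewma = np.zeros(len(residuals))
for k in range(len(residuals)):
    prev = ewma[k - 1] if k else 0.0
    ewma[k] = lam * residuals[k] + (1 - lam) * prev
limit = L * sigma * np.sqrt(lam / (2.0 - lam))     # asymptotic EWMA control limit

alarms = np.flatnonzero(np.abs(ewma) > limit) + 1  # approximate observation indices
print("alarms at or after the shift at t=300:", alarms[alarms >= 300][:3])
```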
Article
Detecting out-of-control status and diagnosing disturbances leading to the abnormal process operation early are crucial in minimizing product quality variations. Multivariate statistical techniques are used to develop detection methodology for abnormal process behavior and diagnosis of disturbances causing poor process performance. Principal components and discriminant analysis are applied to quantitatively describe and interpret step, ramp and random-variation disturbances. All disturbances require high-dimensional models for accurate description and cannot be discriminated by biplots. Diagnosis of simultaneous multiple faults is addressed by building quantitative measures of overlap between models of single faults and their combinations. These measures are used to identify the existence of secondary disturbances and distinguish their components. The methodology is illustrated by monitoring the Tennessee Eastman plant simulation benchmark problem subjected to different disturbances. Most of the disturbances can be diagnosed correctly, the success rate being higher for step and ramp disturbances than random-variation disturbances.
Article
The dynamics of semibatch reactors is often neglected in the literature despite their industrial importance. This article analyzes the stability and dynamic behavior of semibatch polymerization reactors operated according to a flow scheduling strategy designed to impart a steady-state nature to the dynamics of these essentially transient reactors. It can also improve the operation of these reactors, especially the quality of the polymer produced (such as molecular weight distribution and polymer composition). In the proposed strategy of flow rate scheduling, the intensive states of the reactor can be made to reach steady-state values. A comparison of the dynamics of the proposed and classical operating strategies illustrates this possibility. Further dynamic analysis reveals the emergence of phenomena characteristic of continuous operation in a CSTR. Examples are multiplicity of the trajectories, limit cycle oscillations, as well as nonhomogeneous oscillations belonging to a period doubling cascade. Operation in a sequential semibatch mode is discussed, as well as the importance of selecting parameters for the fill-up and discharge operations. In this mode of operation either periodic or chaotic behavior is obtained. The effect of both on polymer properties is studied in detail.
Article
Measurements from industrial processes are often serially correlated. The impact of this correlation on the performance of the cumulative sum and exponentially weighted moving average charting techniques is investigated in this paper. It is shown that serious errors concerning the "state of statistical process control" may result if the correlation structure of the observations is not taken into account. The use of time series methods for coping with serially correlated observations is outlined. Paper basis weight measurements are used to illustrate the time series methodology.
Conference Paper
Very general reduced order filtering and modeling problems are phrased in terms of choosing a state based upon past information to optimally predict the future as measured by a quadratic prediction error criterion. The canonical variate method is extended to approximately solve this problem and give a near optimal reduced-order state space model. The approach is related to the Hankel norm approximation method. The central step in the computation involves a singular value decomposition which is numerically very accurate and stable. An application to reduced-order modeling of transfer functions for stream flow dynamics is given.
Article
In this paper we extend previous work by ourselves and other researchers in the use of principal component analysis (PCA) for statistical process control in chemical processes. PCA has been used by several authors to develop techniques to monitor chemical processes and detect the presence of disturbances [1–5]. In past work, we have developed methods which not only detect disturbances, but isolate the sources of the disturbances [4]. The approach was based on static PCA models, T2 and Q charts [6], and a model bank of possible disturbances. In this paper we use a well-known ‘time lag shift’ method to include dynamic behavior in the PCA model. The proposed dynamic PCA model development procedure is desirable due to its simplicity of construction, and is not meant to replace the many well-known and more elegant procedures used in model identification. While dynamic linear model identification, and time lag shift are well known methods in model building, this is the first application we are aware of in the area of statistical process monitoring. Extensive testing on the Tennessee Eastman process simulation [7] demonstrates the effectiveness of the proposed methodology.
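The time-lag-shift construction itself is simple to sketch: each observation is augmented with the preceding lagged observations and an ordinary PCA model is fitted to the lagged data matrix. The simulated data, number of lags, and component count below are illustrative assumptions, not the settings used on the Tennessee Eastman simulation.

```python
# Minimal sketch of dynamic PCA via the time-lag-shift method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
N, m = 1000, 4
X = np.zeros((N, m))
for k in range(1, N):                         # autocorrelated process data
    X[k] = 0.6 * X[k - 1] + rng.normal(size=m)

lags = 2
blocks = [X[lags - i:N - i] for i in range(lags + 1)]   # X(t), X(t-1), ..., X(t-lags)
X_lagged = np.hstack(blocks)                             # shape (N - lags, m*(lags+1))

X_lagged = (X_lagged - X_lagged.mean(axis=0)) / X_lagged.std(axis=0)
dpca = PCA(n_components=5).fit(X_lagged)
print("explained variance ratio:", np.round(dpca.explained_variance_ratio_, 2))
```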
Article
For safety and product quality, it is important to monitor process performance in real time. Since traditional analytical instruments are usually expensive to install, a process model can be used instead to monitor process behavior. In this paper, a monitoring approach using a multivariate statistical modeling technique, namely multi-way principal component analysis (MPCA), is studied. The method overcomes the assumption that the system is at steady state and it provides a real time monitoring approach for continuous processes. The monitoring approach using MPCA models can detect faults in advance of other monitoring approaches. Several issues which are important for the proposed approach, such as the model input structure, data pretreatment, and the length of the predictive horizon are discussed. A multi-block extension of the basic methodology is also treated and this extension is shown to facilitate fault isolation. The Tennessee Eastman process is used for demonstrating the power of the new monitoring approach.
The dynamic behaviour of free radical polymerisation reactions in a continuous stirred tank reactor
  • F Teymour
F. Teymour, The dynamic behaviour of free radical polymerisation reactions in a continuous stirred tank reactor. Ph.D. thesis, University of Wisconsin, Madison, 1989.
Multivariate data analysis in chemistry
  • Geladi
  • S Hellberg
  • E Johansson
  • W Lindberg
  • M Sjöström
Geladi, S. Hellberg, E. Johansson, W. Lindberg, M. Sjöström, Multivariate data analysis in chemistry, in: B.R. Kowalski (Ed.), Chemometrics: Mathematics and Statistics in Chemistry, Reidel, Dordrecht, 1984.
N4SID: subspace algorithms for the identification of combined deterministic-stochastic systems
  • Van Overschee
Extracting qualitative dynamics from experimental data
  • Broomhead
Contribution plots: the missing link in multivariate quality control. Presented at the 1993 Annual Fall Technical Conference of ASQC
  • P Miller
  • R E Swanson
  • C F Heckler