Article

An intelligent system for multivariate statistical process monitoring and diagnosis


Abstract

A knowledge-based system (KBS) was designed for automated system identification, process monitoring, and diagnosis of sensor faults. The real-time KBS consists of a supervisory system built with the G2 KBS development software and linked with external statistical modules for system identification and sensor fault diagnosis. The various statistical techniques were prototyped in MATLAB, converted to ANSI C code, and linked with G2 via the G2 Standard Interface. The KBS automatically performs all operations of data collection, identification, monitoring, and sensor fault diagnosis with little or no input from the user. Navigation throughout the KBS is via menu buttons on each user-accessible screen. Selected process variables are displayed on charts showing their history over a period of time. Multivariate statistical tests and contribution plots are also shown graphically. The KBS was evaluated in simulation studies with a polymerization reactor represented by a nonlinear dynamic model. Both normal operating conditions and conditions with process disturbances were examined to evaluate KBS performance. Specific user-defined disturbances were added to the simulation, and the KBS correctly diagnosed both process and sensor faults when present.
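As a toy illustration of the diagnostic reasoning such a KBS might encode (not the paper's actual G2 rule base), the sketch below maps contribution values for an out-of-control statistic to a sensor-fault or process-fault diagnosis. The variable names, the dominance threshold, and the classification heuristic are all assumptions.

```python
# Minimal sketch (not the paper's G2 implementation) of rule-based fault
# classification from variable contributions to an out-of-control statistic.
# Heuristic: a fault concentrated in a single variable's contribution suggests
# a sensor fault; contributions spread across several correlated variables
# suggest a process disturbance.
import numpy as np

def classify_fault(contributions, var_names, dominance_ratio=0.6):
    """Label an alarm as a likely sensor or process fault.

    contributions : 1-D array of non-negative contribution values.
    dominance_ratio : fraction of the total contribution one variable must
        carry before the alarm is attributed to that sensor (assumed value).
    """
    c = np.asarray(contributions, dtype=float)
    total = c.sum()
    if total <= 0:
        return "no significant contributions"
    top = int(np.argmax(c))
    if c[top] / total >= dominance_ratio:
        return f"suspected sensor fault: {var_names[top]}"
    flagged = [v for v, ci in zip(var_names, c) if ci / total > 0.15]
    return "suspected process disturbance involving: " + ", ".join(flagged)

print(classify_fault([0.9, 0.05, 0.05], ["T_reactor", "F_monomer", "P_jacket"]))
print(classify_fault([0.4, 0.35, 0.25], ["T_reactor", "F_monomer", "P_jacket"]))
```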


... Numerous types of industrial process measurement data have been collected as a result of the rapid development of computing technologies and the gradual integration of industrial processes. Thus, significant progress has been made in data-driven industrial fault detection techniques, of which multivariate statistical process monitoring (MSPM) is an important branch [4][5][6][7]. The central concept of MSPM is to project high-dimensional process data onto a low-dimensional set of latent variables and then to develop statistical indicators for fault detection [8]. ...
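As a concrete, hedged illustration of this central MSPM concept, the sketch below fits a PCA model on synthetic normal operating data, projects new samples onto the latent subspace, and computes the two standard indicators, Hotelling's T^2 and the squared prediction error (SPE/Q). The empirical-percentile control limits are a simplification of the usual F- and chi-square-based limits.

```python
# Sketch of the MSPM idea in the excerpt: fit PCA on normal operating data,
# project new samples onto a low-dimensional latent subspace, and monitor
# Hotelling's T^2 (variation inside the subspace) and SPE/Q (residual
# variation outside it). Control limits here are empirical percentiles of
# the training statistics, a simplification of the standard limits.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # stand-in for normal operating data
X = (X - X.mean(0)) / X.std(0)            # autoscale

# PCA via SVD, retaining k latent variables
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
P = Vt[:k].T                              # loadings (10 x k)
lam = (s[:k] ** 2) / (len(X) - 1)         # latent-variable variances

def t2_spe(x):
    t = P.T @ x                           # scores: projection onto latent space
    t2 = float(np.sum(t ** 2 / lam))      # Hotelling's T^2
    resid = x - P @ t
    spe = float(resid @ resid)            # squared prediction error (Q)
    return t2, spe

stats = np.array([t2_spe(x) for x in X])
t2_lim = np.percentile(stats[:, 0], 99)
spe_lim = np.percentile(stats[:, 1], 99)

x_new = X[0] + np.array([4.0] + [0.0] * 9)   # simulated bias on variable 1
t2, spe = t2_spe(x_new)
print(f"T2={t2:.1f} (limit {t2_lim:.1f}), SPE={spe:.1f} (limit {spe_lim:.1f})")
```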
... The SAE training procedure consists of two phases: layer-by-layer pre-training and fine-tuning. In the initial stages, each unit of the SAE's hidden layer is trained by minimizing the loss function using Equation (5). For the first hidden layer, the input is the raw data, and the AE is trained to obtain the parameters {W_1, b_1}; for the second hidden layer, the input is the first hidden unit and the parameters {W_2, b_2} are obtained; and for the k-th hidden layer, the input is the (k − 1)-th hidden unit and the parameters {W_k, b_k} are obtained. ...
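A minimal sketch of the greedy layer-wise pre-training described in the excerpt, written here in PyTorch. The layer sizes, activation, optimizer, and epoch count are illustrative, and the subsequent fine-tuning phase is omitted.

```python
# Sketch of greedy layer-wise pre-training: each hidden layer's autoencoder is
# trained on the output of the previous layer, yielding parameters {W_k, b_k}
# one layer at a time (fine-tuning of the full stack would follow).
import torch
import torch.nn as nn

def pretrain_sae(X, hidden_sizes, epochs=50, lr=1e-3):
    encoders, inp = [], X
    for h in hidden_sizes:
        d = inp.shape[1]
        enc, dec = nn.Linear(d, h), nn.Linear(h, d)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            z = torch.sigmoid(enc(inp))                  # hidden representation
            loss = nn.functional.mse_loss(dec(z), inp)   # reconstruction loss
            loss.backward()
            opt.step()
        encoders.append(enc)
        inp = torch.sigmoid(enc(inp)).detach()   # feed layer-k features to layer k+1
    return encoders

X = torch.randn(256, 20)                          # stand-in process data
encoders = pretrain_sae(X, hidden_sizes=[12, 6])  # {W_1,b_1}, then {W_2,b_2}
print([tuple(e.weight.shape) for e in encoders])
```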
... The findings obtained by monitoring with multiple MSPM and deep network approaches, including PCA, KPCA [24], AE [30], M-DBN [24], LE-DBN [31], SAE, and SiSAE, are presented. For faults 4, 5, 6, 7, 10, 11, 14, 16, 19, and 20, our technique produced the highest DR, and the detection performance was significantly enhanced, particularly for faults 11, 19, and 20. In addition, the average DR was the highest, with our method demonstrating a low FAR, as shown in Table 2. ...
Article
Full-text available
Due to the growing complexity of industrial processes, it is no longer adequate to perform precise fault detection based solely on the global information of process data. In this study, a silhouette stacked autoencoder (SiSAE) model is constructed for process data by considering both global/local information and silhouette information to depict the link between local and cross-local information. The SiSAE model comprises three components: hierarchical clustering, a silhouette loss, and jointly trained stacked autoencoders (SAEs). Hierarchical clustering is used to partition the raw data into blocks, which clarifies the information's characteristics. To account for silhouette information between data, a silhouette loss function is constructed by raising the inner-block data distance and decreasing the cross-center block distance. Each data block has a properly sized SAE model, and the models are jointly trained via the silhouette loss to extract features from all available data. The proposed method is validated using the Tennessee Eastman (TE) benchmark and semiconductor industrial process data. Comparative tests on the TE benchmark indicate that the average rate of fault identification increases from 75.8% to 83%, while the average rate of false detection drops from 4.6% to 3.9%.
... With the rapid development of new sensors and data-gathering equipment, multivariate statistical process monitoring (MSPM) methods have progressed quickly in recent decades [1][2][3][4][5][6][7]. Among these MSPM methods, principal component analysis (PCA) usually serves as the most fundamental one and has been researched extensively [8][9][10][11][12][13][14][15]. ...
... [Table fragment: IDV(5), a step disturbance in the condenser cooling water inlet temperature.] ...
Article
Full-text available
Multiblock principal component analysis (MBPCA) methods are gaining increasing attention in monitoring plant-wide processes. Generally, MBPCA assumes that some process knowledge is incorporated for block division; however, process knowledge is not always available. A new, totally data-driven MBPCA method, which employs mutual information (MI) to divide the blocks automatically, has been proposed. By constructing sub-blocks using MI, the division not only considers linear correlations between variables but also takes into account nonlinear relations, thereby involving more statistical information. The PCA models in sub-blocks reflect more local behaviors of the process, and the results in all blocks are combined by support vector data description. The proposed method is implemented on a numerical process and the Tennessee Eastman process. Monitoring results demonstrate its feasibility and efficiency.
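The block-division step might be sketched as follows, with scikit-learn's mutual-information estimator and SciPy's hierarchical clustering standing in for the paper's exact procedure. The MI-to-distance mapping and the block count are assumptions, and the per-block PCA models and SVDD combination step are omitted.

```python
# Sketch of MI-based block division: estimate pairwise mutual information
# between variables, convert it to a distance, and cluster variables into
# blocks with hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
X[:, 1] = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)   # nonlinear relation
X[:, 4] = X[:, 3] + 0.1 * rng.normal(size=300)        # linear relation

m = X.shape[1]
mi = np.zeros((m, m))
for j in range(m):
    mi[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
mi = (mi + mi.T) / 2                                  # symmetrize the estimate
dist = 1.0 / (1.0 + mi)                               # high MI -> small distance
np.fill_diagonal(dist, 0.0)

# condensed upper-triangle distance vector for scipy's linkage
iu = np.triu_indices(m, k=1)
blocks = fcluster(linkage(dist[iu], method="average"), t=3, criterion="maxclust")
print("variable-to-block assignment:", blocks)
```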
... Cinar et al. have successfully combined multivariate statistical data analysis with expert systems for process fault diagnosis. Essentially, the multivariate statistical data analysis module developed in MATLAB was converted into C code and then linked with the G2 expert system through a G2 standard interface (GSI) link [32], [34]. Cinar et al. exploited only the G2 diagnostic assistant (GDA) capability (i.e., a graphical design tool similar to Simulink/MATLAB). ...
Conference Paper
Full-text available
Intelligent control and asset management for the petroleum industry is crucial for profitable oil and gas facilities operation and maintenance. A research program was initiated to study the feasibility of an intelligent asset management system for the offshore oil and gas industry in Atlantic Canada. The research program has achieved several milestones. The conceptual model of an automated asset management system, its architecture, and its behavioral model have been defined (1, 2). Furthermore, an implementation plan for such a system has been prepared, and the appropriate development tools have been chosen (3). A system reactive agent structure was defined based on the MATLAB environment, and its communication requirements were analyzed and validated (31). This paper builds on the previous work and proposes a general structure for the ICAM system's intelligent supervisory agent and its software implementation. We also describe the software implementation using the G2 expert system development environment. Furthermore, we analyze and define the autonomy requirements of the reactive agents of such a system. Asset management and control of modern process plants involve many tasks of different time-scales and complexity, including data reconciliation and fusion, fault detection, isolation, and accommodation (FDIA), process model identification and optimization, and supervisory control. The automation of these complementary tasks within an information and control infrastructure will reduce maintenance expenses, improve utilization and output of manufacturing equipment, enhance safety, and improve product quality. Many research studies proposed different combinations of systems-theoretic and artificial intelligence techniques to tackle the asset management problem and delineated the requirements of such a system (4), (5), (6).
... Ref. [25] presented IntelliSPC, which identifies quality issues from online monitored data and associates them with plausible causes using pattern recognition derived from shop-floor variables. Ref. [26] introduced knowledge-based systems designed to automatically identify process variations based on sensor fault diagnosis. ...
Article
Full-text available
Digital transformations in manufacturing systems confer advantages for enhancing competitiveness and ensuring the survival of companies by reducing operating costs, improving quality, and fostering innovation, falling within the overarching umbrella of Industry 4.0. This study aims to provide a framework for the integration of smart statistical digital systems into existing manufacturing control systems, exemplified with guidelines to transform an existing statistical process control system into a smart statistical process control system. Employing the design science research method, the research techniques include a literature review and interviews with experts who critically evaluated the proposed framework. The primary contribution lies in a set of general-purpose guidelines tailored to assist practitioners in manufacturing systems with the implementation of digital, smart technologies aligned with the principles of Industry 4.0. The resulting guidelines specifically target existing manufacturing plants seeking to adopt new technologies to maintain competitiveness. The main implication of the study is that practitioners can utilize the guidelines as a roadmap for the ongoing development and implementation of project management. Furthermore, the study paves the way for open innovation initiatives by breaking down the project into defined steps and encouraging individual or collective open contributions, which consolidates the practice of open innovation in manufacturing systems.
... This framework was extended to supervise multivariable system operation and used robust control techniques to retune or restructure the control system automatically (Kendra, Basila, & Cinar, 1994). Multivariate statistical techniques were linked with the KBS to integrate multivariate process monitoring and fault diagnostics by using G2 by Gensym, Inc. (Gensym, 1996), a commercial real-time KBS development system for process operations (Tatara & Cinar, 2002). This framework was extended to control system performance assessment and modification (Schäfer & Cinar, 2004). ...
Article
An adaptive-learning model predictive control (AL-MPC) framework is proposed for incorporating disturbance prediction, model uncertainty quantification, pattern learning, and recursive subspace identification for use in controlling complex dynamic systems with periodically recurring large random disturbances. The AL-MPC integrates online learning from historical data to predict the future evolution of the model output over a specified horizon and proactively mitigate significant disturbances. This goal is accomplished using a dynamic regularized latent variable regression (DrLVR) approach to quantify disturbances from past data and forecast their future progression as time series. An enveloped path for the future behavior of the model output is extracted to further enhance the robustness of the closed-loop system. The controller set-point, penalty weights of the objective function, and constraint criteria can be modified in advance for the expected periods of the disturbance effects. The proposed AL-MPC is used to regulate glucose concentration in people with Type 1 diabetes by an automated insulin delivery system. Simulation results demonstrate the effectiveness of the proposed technique by improving the performance indices of the closed-loop system. The MPC algorithm integrated with the DrLVR disturbance predictor is compared to an MPC reinforced with dynamic principal component analysis linked with a K-nearest-neighbors and hyper-spherical clustering (k-means) technique. The simulation results illustrate that the AL-MPC can regulate the glucose concentrations of people with Type 1 diabetes to stay in the desired range (70–180 mg/dL) 84.4% of the time without causing any hypoglycemia or hyperglycemia events.
... As the system becomes increasingly complex, the importance of discovering problems in time has received unprecedented attention from the academic and industrial world. With the rapid development of computer science and technology, multivariable statistical process monitoring (MSPM) has become the primary tool for application in this field [1][2][3][4][5][6][7][8]. MSPM methods, such as principal component analysis (PCA) [9], partial least squares [10], independent component analysis [11] and Fisher discriminant analysis [12], are suitable for massive data, and their extensions have been extensively applied to chemical production processes. ...
Article
Full-text available
Autoencoders and stacked autoencoders (SAEs) are efficient for detecting abnormal situations in process monitoring because of their powerful deep feature representation capability. However, SAEs are prone to overfitting during training, thereby degrading this representation. Furthermore, several nodes of the same layer in the SAE carry duplicate information, and thus the features are strongly correlated. To solve these problems, a novel regularization strategy, in which the inner product is introduced, is proposed for the SAE to reduce overfitting more effectively. The modified SAE is called an inner product-based stacked autoencoder (IPSAE). SAEs aim to reduce the Euclidean distance between the output and input matrices through iterative calculation, whereas the IPSAE adds the inner products between the outputs of the neurons to the objective function to regularize the features and reduce feature redundancy. Hence, after determining the structure of the SAE, it is trained to lower both the reconstruction error and the inner products between the outputs of the neurons to improve the deep feature representation of the industrial process. The proposed model is applied to a numerical system and a Tennessee Eastman dataset, and demonstrates the best performance when compared with several state-of-the-art models.
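The inner-product regularizer can be sketched as follows, for a single autoencoder layer rather than the full IPSAE and with an assumed penalty weight: the loss combines the reconstruction error with the squared off-diagonal entries of the Gram matrix of hidden activations.

```python
# Sketch of inner-product regularization: alongside the usual reconstruction
# loss, penalize inner products between the outputs of different hidden
# neurons (off-diagonal of H^T H) to reduce feature redundancy. The weight
# `beta` and network sizes are illustrative.
import torch
import torch.nn as nn

enc, dec = nn.Linear(20, 8), nn.Linear(8, 20)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
X = torch.randn(256, 20)
beta = 1e-3

for _ in range(100):
    opt.zero_grad()
    H = torch.sigmoid(enc(X))                       # hidden activations (N x 8)
    recon = nn.functional.mse_loss(dec(H), X)       # Euclidean reconstruction term
    G = H.T @ H                                     # Gram matrix of neuron outputs
    offdiag = G - torch.diag(torch.diagonal(G))
    ip_penalty = (offdiag ** 2).sum() / H.shape[0]  # inner-product regularizer
    loss = recon + beta * ip_penalty
    loss.backward()
    opt.step()
print(f"final loss {loss.item():.4f}")
```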
... Under such a background, this study employs the deep belief network (DBN) and selects the active features based on the "activity degree" developed in this study to perform process monitoring, achieving good performance. Multivariate statistical process monitoring (MSPM) [1][2][3][4][5][6][7][8][9][10] has been extensively applied due to its data-based nature and has become an advanced research hotspot in recent years. Among the MSPM methods, principal component analysis (PCA) is the most fundamental and widely used technology [11][12][13][14][15]. ...
Article
Full-text available
Recently, based on its powerful capability for feature extraction, the deep learning technique has been applied to the field of process monitoring; usually, these studies utilize all the abstract features to establish the detection model and detect or classify the fault. However, whether all the extracted features are valid and beneficial for process monitoring has never been researched and discussed. If some features are adverse for process monitoring, the detection performance of the model would be reduced once they are considered in the model, while utilizing the features that are advantageous for process monitoring could ameliorate the performance of the detection model. Motivated by this, a feasibility analysis of each feature captured by a deep belief network for process monitoring is executed, and the conception of active features (AFs), which have active expression for the occurrence of a fault, is proposed. Based on AFs, the Euclidean metric is utilized to calculate the dissimilarity between the test sample and the training sample, and a moving average technique is employed to reduce the effect of burst noise in measurement variables on the result. Finally, comparisons of the fault detection rate with other advanced methods on a numerical process and the TE process demonstrate the feasibility and superiority of the proposed method, AF-DBN.
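The detection index described in this abstract might look roughly like the sketch below, with identity features standing in for the DBN's active features; the window length and alarm limit are assumptions.

```python
# Sketch of the detection index: a Euclidean dissimilarity between each test
# sample's features and its nearest training-sample features, smoothed with a
# moving average to suppress burst noise. Feature extraction (the DBN and the
# active-feature selection) is replaced by identity features for brevity.
import numpy as np

def dissimilarity_index(train_feats, test_feats, window=5):
    # distance from each test sample to its nearest normal training sample
    d = np.array([np.min(np.linalg.norm(train_feats - x, axis=1))
                  for x in test_feats])
    # moving average over the last `window` samples
    kernel = np.ones(window) / window
    return np.convolve(d, kernel, mode="valid")

rng = np.random.default_rng(2)
train = rng.normal(size=(200, 4))
test = rng.normal(size=(60, 4))
test[30:] += 3.0                       # simulated fault from sample 30 on
idx = dissimilarity_index(train, test)
limit = 1.1 * idx[:20].max()           # crude limit from the fault-free segment
print("first alarm at smoothed sample", int(np.argmax(idx > limit)))
```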
... Later, object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing the importance of each rule to enable conflict resolution. ...
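A minimal sketch of the priority-based conflict resolution mentioned in this excerpt, with invented rules and facts (not those of the cited systems): all rules whose conditions fire form the conflict set, and the highest-priority rule wins.

```python
# Sketch of priority-based conflict resolution in a rule-based KBS: when
# several rules' conditions fire on the same facts, the conflict set is
# resolved by rule priority (salience). Rules and facts are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: str
    priority: int = 0          # higher value wins conflict resolution

rules = [
    Rule("sensor-bias", lambda f: f["spe_alarm"] and f["one_var_dominates"],
         "flag sensor fault, reconstruct measurement", priority=10),
    Rule("process-upset", lambda f: f["t2_alarm"],
         "flag process disturbance, notify operator", priority=5),
    Rule("all-clear", lambda f: not (f["t2_alarm"] or f["spe_alarm"]),
         "continue monitoring", priority=0),
]

facts = {"t2_alarm": True, "spe_alarm": True, "one_var_dominates": True}
conflict_set = [r for r in rules if r.condition(facts)]
winner = max(conflict_set, key=lambda r: r.priority)
print(f"{winner.name}: {winner.action}")
```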
Chapter
Alarm systems warn people with T1D when hypoglycemia occurs or can be predicted to occur in the near future if the current glucose concentration trends continue. Various alarm system development strategies are outlined in this chapter. Severe hypoglycemia has significant effects ranging from dizziness to diabetic coma and death, while long periods of hyperglycemia cause damage to the vascular system. Fear of hypoglycemia is a major concern for many people with T1D. High doses of exogenous insulin relative to food, activity, and low blood glucose levels can precipitate hypoglycemia. Hypoglycemia and hyperglycemia early alarm systems would be very beneficial for people with T1D, warning them or their caregivers about a potential hypoglycemia or hyperglycemia episode before it happens and empowering them to take measures to prevent these events.
... Later, object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing the importance of each rule to enable conflict resolution. ...
Chapter
The complexity of glucose homeostasis presents a challenge for tight control of blood glucose concentrations (BGC) in response to major disturbances. The nonlinearities and time-varying changes of the BGC dynamics, the occurrence of nonstationary disturbances, time-varying delays on measurements and insulin infusion, and noisy data from sensors provide a challenging system for the AP. In this chapter, a multimodule, multivariate, adaptive AP system is described to deal with several of these challenges simultaneously. Adaptive control systems can tolerate unpredictable changes in a system, and external disturbances by quickly adjusting the controller parameters without any need for knowledge of the initial parameters or conditions of the system. Physiological variables provide additional information that enable feedforward action for measurable disturbances such as exercise. Integration of control algorithms with hypoglycemia alarm module reduces the probability of hypoglycemic events.
... Later, object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing the importance of each rule to enable conflict resolution. ...
Chapter
Full-text available
An AP system is challenged by several factors such as meals, exercise, sleep and stress that may have significant effects on glucose dynamics in the body. In this chapter, the relationship between these factors and the glucose dynamics are discussed. Most AP systems are based only on glucose measurements. These systems usually require manual inputs or adjustments by the users about the occurrences of some of these factors such as meals and exercise. Alternatively, multivariable AP systems have been proposed that use biometric variables in addition to glucose measurements to indicate the presence of these factors without a need for manual user input. The effects of different types of insulin as well as use of glucagon in AP systems is also discussed. The chapter includes a discussion of time delays in glucose sensors that affect the performance of predictive hypoglycemia alarm systems and APs.
... Later, object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing the importance of each rule to enable conflict resolution. ...
Chapter
The performance of an AP system depends on the successful operation of its components. Faults in sensors, other hardware, and software affect the performance and may force the system to manual operation. Many AP systems use model predictive controllers that rely on models to predict BGC and to calculate the optimal insulin infusion rate. Their performance depends on the accuracy of the models and data used for predictions. Sensor errors and missing signals will cause calculation of erroneous insulin infusion rates. Techniques for fault detection and diagnosis and for reconciliation of erroneous data with reliable estimates are presented. Since the models used in the controller may become less accurate with changes in the operating conditions, controller performance assessment is also conducted to evaluate the performance and determine if it can be improved by adjusting the model, parameters, or constraints of the controller.
... Later, object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing the importance of each rule to enable conflict resolution. ...
Book
Full-text available
Significant progress has been made in finding a cure for diabetes. Research in islet transplantation, islet growth from adult stem cells, and gene-based therapies shows good promise and will provide alternatives to cure diabetes. Advances in the treatment of diabetes have offered new technologies that ease the daily burden of people with diabetes, improve their quality of life, and extend their life span. They provide valuable technologies to reduce the impact of diabetes while waiting for a cure. The complexity of glucose homeostasis and the current level of technology challenge tight blood glucose concentration (BGC) regulation. Artificial pancreas (AP) systems that closely mimic the glucose-regulating function of a healthy pancreas automate BGC management, dramatically reducing diabetes-related risks and improving the lives of people who have the disease. These systems will monitor glucose levels around the clock and automatically infuse the optimal amount of insulin, and potentially other BGC-stabilizing hormones, in a timely manner. The nonlinearities and time-varying changes of blood glucose dynamics, the occurrence of non-stationary disturbances, time-varying delays on measurements and insulin infusion, and noisy data from sensors provide challenges for the AP. Several different types of AP system designs have been proposed in recent years. Most systems rely exclusively on continuous glucose measurements and adjust the insulin infusion rate of a pump. Advances in wearable devices that report physiological data in real time enabled the use of additional information and the development of multivariable AP systems. Progress in long-term stable glucagon research enabled the development of dual-hormone AP system designs. Advances in smartphones and communications technologies, and in control theory, contributed to the development of powerful control algorithms that can be executed on smartphones and computational capabilities installed in insulin pump systems. Techniques in system monitoring and supervision, fault detection and diagnosis, and performance assessment enabled advanced diagnostics and fault-tolerant control technologies for AP systems. The goal of this book is to introduce recent developments and directions for future progress in AP systems. The material covered represents a culmination of several years of theoretical and applied research carried out by the authors and many prominent research groups around the world. The book starts with some historical background on diabetes and AP systems. The heart of the AP system - sophisticated algorithms that function on a smartphone or similar device - collects information from the sensor of a continuous glucose monitor and wearable devices, computes the optimal insulin dose to infuse, and instructs the insulin pump to deliver it. The early chapters of the book provide information about currently available devices, techniques, and algorithms to develop AP systems. Then, several factors such as meals, exercise, stress, and sleep (MESS) that challenge AP systems are discussed. In later chapters, both empirical (data-driven) and first-principles-based modeling techniques are presented. Recursive modeling techniques that enable adaptive control of the AP are introduced and integrated with multiple-input models used in adaptive control. Different control strategies such as model predictive, proportional-integral-derivative, generalized predictive, and fuzzy-logic control are introduced.
Physiological variables that can provide additional information to enable feedforward action to deal with MESS challenges are proposed. Several additional modules to address the challenges of MESS factors are discussed, and a multi-module adaptive multivariable AP system is described. Fault detection and reconciliation of missing or erroneous data and assessment of controller performance are presented to develop modules for fault-tolerant operation of an AP. A summary of recent clinical studies is provided, and directions for future developments are discussed. Over 300 references are listed to provide a database of publications in many AP-related areas.
... The literature on FDI in polymerization reactors is not very extensive. Among these few works, Kaboré et al. (2000) use non-linear high-gain observers, Tatara and Cinar (2002) use knowledge-based systems, and Kumar et al. (2003) use statistical approaches. In this paper, linear observers are used in the design of a robust FDI system for on-line detection and isolation of abnormal situations in a styrene polymerization process. ...
Article
Full-text available
The proper operation of the industrial polymerization reactor is a challenging problem and a significant business opportunity for Process System Engineering application, which is, in a broad sense, commonly called Polymerization Reactor Engineering. The technical challenges are specific to the particular case, but they are mainly due to some general characteristics such as complex nonlinear, multivariable, and interactive dynamic behavior, potential open-loop instability, and multiple steady states. Also, these reactors involve highly exothermic reactions, varying process conditions, unknown reaction kinetics, and high viscosity, which often lead to difficult operation. Although there are quite a large number of studies on polymerization reactor engineering, they are mainly dedicated to aspects such as design, modeling, simulation, optimization, and control. Very few of these studies have focused on the monitoring of critical process parameters. Changes in these parameters can be detrimental to the safety, reliability, and efficiency of the process operation. This paper deals with the robust on-line detection and isolation of abnormal situations in an industrial continuous styrene polymerization reactor through a bank of unknown input observers that detect changes in the most relevant process parameters and external disturbances. A model predictive control scheme is implemented aiming to stabilize the system. This may become an additional difficulty for the detection of abnormal situations, as the controller usually hides the effects of parameter changes on the system output. In the design of the unknown input observers, a linearized model of the process is utilized. The observers are tuned to detect the change of a particular parameter of the reactor model. The procedure takes into account possible uncertainties in these parameters such that a robust detection strategy for abnormal situations is obtained. Simulation results show a very promising perspective for the proposed strategy.
... Cinar et al. have successfully combined multivariate statistical data analysis with expert systems for process fault diagnosis. Essentially, the multivariate statistical data analysis module developed in MATLAB was converted into C code and then linked with the G2 expert system through a G2 standard interface (GSI) link [32], [34]. Cinar et al. exploited only the G2 diagnostic assistant (GDA) capability (i.e., a graphical design tool similar to Simulink/MATLAB). ...
Conference Paper
Full-text available
This paper addresses a practical intelligent multi- agent system for asset management for the petroleum industry, which is crucial for profitable oil and gas facilities operations and maintenance. A research project was initiated to study the feasibility of an intelligent asset management system. Having proposed a conceptual model, architecture, and implementation plan for such a system in previous work and defined its autonomy, communications, and artificial intelligence (AI) requirements, we are proceeding to build a system prototype and simulate it in real time to validate its logical behavior in normal and abnormal process situations. We also conducted a thorough system performance analysis to detect any computational bottlenecks. Although the preliminary system prototype design has limitations, simulation results have demonstrated an effective system logical behavior and performance.
... The problems of fault detection and isolation (FDI) and fault-tolerant control (FTC) of dynamic systems have been the focus of considerable research interest over the past few decades in both the academic and industrial circles (e.g., see [1], [2], [3], [4], [5], [6], [7], [8] and the references therein). Despite the extensive literature on these topics, most of the available results have been developed for spatially homogeneous processes modeled by systems of ordinary differential equations. ...
Conference Paper
This work develops a robust fault detection and isolation (FDI) and fault-tolerant control (FTC) structure for distributed processes modeled by nonlinear parabolic PDEs with control constraints, time-varying uncertain variables and a finite number of output measurements with limited accuracy. To facilitate the controller synthesis and fault diagnosis tasks, a finite-dimensional system that approximates the dominant dynamic modes of the PDE is initially derived and transformed to a form where each dominant mode is excited directly by only one actuator. A robustly stabilizing bounded output feedback controller is then designed for each dominant mode. The controller synthesis procedure facilitates the derivation of (1) an explicit characterization of the fault-free behavior of each mode in terms of a time-varying bound on the dissipation rate of the corresponding Lyapunov function which accounts for the uncertainty and measurement errors, and (2) an explicit characterization of the robust stability region where constraint satisfaction and robustness with respect to uncertainty and measurement errors are guaranteed. Using the fault-free Lyapunov dissipation bounds as thresholds for FDI, the detection and isolation of faults in a given actuator is accomplished by monitoring the evolution of the dominant modes within the corresponding stability region and declaring a fault when the threshold is exceeded. Robustness of the FDI scheme to measurement errors is ensured by confining the FDI region to an appropriate subset of the stability region, and enlarging the FDI thresholds appropriately. It is shown that these safeguards can be tuned by appropriate selection of the sensor configuration. Finally, the implementation of the FTC architecture on the infinite-dimensional system is discussed and the proposed methodology is demonstrated using a diffusion-reaction process example.
... The problems of fault diagnosis and fault-tolerant control (FTC) of dynamic systems have been the focus of considerable research interest over the past few decades in both the academic and industrial circles (e.g., see [1], [2], [3], [4], [5], [6], [7], [8], [9], [10] and the references therein). Despite the extensive literature on these topics, most of the available results have been developed for spatially homogeneous processes modeled by systems of ordinary differential equations. ...
Conference Paper
This paper presents a methodology for the design of integrated robust fault detection and isolation (FDI) and fault-tolerant control (FTC) architecture for transport- reaction processes modeled by nonlinear parabolic partial differential equations (PDEs) with time-varying uncertain variables, actuator constraints and faults. The design is based on an approximate, finite-dimensional system that captures the dominant dynamic modes of the PDE. Initially, an invertible coordinate transformation, obtained with judicious actuator placement, is used to transform the approximate system into an equivalent form where the evolution of each dominant mode is excited directly by only one actuator and decoupled from the rest. For each mode, a robustly stabilizing bounded feedback controller that achieves an arbitrary degree of asymptotic attenuation of the effect of uncertainty is then synthesized and its constrained stability region is explicitly characterized in terms of the constraints, actuator locations and the size of uncertainty. A key idea in the controller synthesis is to shape the healthy closed-loop response of each mode in a prescribed fashion that decouples the effects of uncertainty and other modes on its dynamics, thus allowing (1) the derivation of performance-based FDI rules for each actuator, and (2) an explicit characterization of the state-space regions where FDI can be performed under uncertainty and constraints. Following FDI, a switching law is derived to orchestrate actuator reconfiguration in a way that preserves robust closed- loop stability. Finally, the theoretical results are demonstrated using a diffusion-reaction process example.
... Existing results on the design of fault-detection filters include those that use past plant-data and those that use fundamental process models for the purpose of faultdetection filter design. Statistical and pattern recognition techniques for data analysis and interpretation (for example, [25,39,36,12,35,11,7,42,1,46]) use past plant-data to construct indicators that identify deviations from normal operation to detect faults. The problem of using fundamental process models for the purpose of detecting faults has been studied extensively in the context of linear systems [27,19,20,9,29] and more recently some existential results in the context of nonlinear systems have been derived [40,10]. ...
Article
This work focuses on fault-tolerant control of a gas phase polyethylene reactor. Initially, a family of candidate control configurations, characterized by different manipulated inputs, is identified. For each control configuration, a bounded nonlinear feedback controller, that enforces asymptotic closed-loop stability in the presence of constraints, is designed, and the constrained stability region associated with it is explicitly characterized using Lyapunov-based tools. Next, a fault-detection filter is designed to detect the occurrence of a fault in the control actuator by observing the deviation of the process states from the expected closed-loop behavior. A switching policy is then derived, on the basis of the stability regions, to orchestrate the activation/deactivation of the constituent control configurations in a way that guarantees closed-loop stability in the event of control system faults. Closed-loop system simulations demonstrate the effectiveness of the fault-tolerant control strategy.
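The filter-plus-switching logic of this abstract can be caricatured with scalar dynamics, as in the sketch below; the pole values, fault size, and residual threshold are invented and bear no relation to the polyethylene reactor model.

```python
# Sketch of filter-based fault detection with actuator reconfiguration: a
# model of the expected fault-free closed-loop state runs in parallel with
# the plant; when the residual between measured and expected states exceeds
# a threshold, the supervisor switches to a fallback control configuration.
a_closed = 0.8                     # expected fault-free closed-loop pole
threshold = 0.5                    # residual threshold (assumed)
config = "primary"

x_plant, x_filter = 1.0, 1.0
for k in range(40):
    u_effect = a_closed if config == "primary" else 0.85
    fault = 0.3 if (config == "primary" and k >= 15) else 0.0  # actuator fault
    x_plant = u_effect * x_plant + fault
    x_filter = a_closed * x_filter                  # expected fault-free behavior
    residual = abs(x_plant - x_filter)
    if config == "primary" and residual > threshold:
        config = "fallback"                         # reconfigure actuators
        x_filter = x_plant                          # re-initialize the filter
        print(f"fault detected at step {k}; switching to fallback configuration")
print("final configuration:", config)
```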
... Norvilas et al. (2000) proposed an intelligent SPM framework by interfacing KBS and MV techniques and demonstrated its performance with simulation studies. This system was also extended to sensor validation (Tatara and Cinar, 2002). Integrated use of MSPM techniques and RTKBS for real-time on-line monitoring and FDD of fermentation processes was recently proposed (Undey et al., 2000;Glassey et al., 2000;Leung and Romagnoli, 2002). ...
Article
Real-time supervision of batch operations during the progress of a batch run offers many advantages over end-of-batch quality control. Process monitoring, quality estimation, and fault diagnosis activities are automated and supervised by embedding them into a real-time knowledge-based system (RTKBS). Interpretation of multivariate charts is also automated through a generic rule-base for efficient alarm handling and fault diagnosis. Multivariate statistical techniques such as multiway partial least squares (MPLS) provide a powerful modeling, monitoring, and supervision framework. Online process monitoring techniques are developed and extended to include predictions of end-of-batch quality measurements during the progress of a batch run. The integrated RTKBS and the implementation of MPLS-based process monitoring and quality control are illustrated using a fed-batch penicillin production benchmark process simulator.
... Norvilas et al. (2000) have proposed an intelligent SPM framework by interfacing KBS and MSPM techniques and demonstrated its performance with simulation studies using a continuous polymerization reactor. This system was also extended to include sensor validation (Tatara and Cinar, 2002). Integrated use of MSPM techniques and RTKBS for real-time on-line monitoring and FDD of cultivation processes was recently proposed (Undey et al., 2000;Glassey et al., 2000;Leung and Romagnoli, 2002). ...
Article
Supervision of batch bioprocess operations in real-time during the progress of a batch run offers many advantages over end-of-batch quality control. Multivariate statistical techniques such as multiway partial least squares (MPLS) provide an efficient modeling and supervision framework. A new type of MPLS modeling technique that is especially suitable for online real-time process monitoring and the multivariate monitoring charts are presented. This online process monitoring technique is also extended to include predictions of end-of-batch quality measurements during the progress of a batch run. Process monitoring, quality estimation and fault diagnosis activities are automated and supervised by embedding them into a real-time knowledge-based system (RTKBS). Interpretation of multivariate charts is also automated through a generic rule-base for efficient alarm handling. The integrated RTKBS and the implementation of MPLS-based process monitoring and quality control are illustrated using a fed-batch penicillin production benchmark process simulator.
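A rough sketch of the MPLS quality-prediction idea follows, using scikit-learn's PLSRegression on synthetic, batch-wise-unfolded data. Filling the unmeasured future of the current batch with a mean trajectory is one of several options for online use and is an assumption here.

```python
# Sketch of MPLS-style end-of-batch quality prediction: batch trajectories
# (batches x variables x time) are unfolded into a two-way array and a PLS
# model maps them to final quality. All data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_batches, n_vars, n_time = 40, 5, 30
X3 = rng.normal(size=(n_batches, n_vars, n_time))
y = X3[:, 0, :].mean(axis=1) + 0.1 * rng.normal(size=n_batches)  # quality

X = X3.reshape(n_batches, n_vars * n_time)       # batch-wise unfolding
pls = PLSRegression(n_components=3).fit(X, y)

# "online" prediction for a new batch observed only up to time 20:
x_new = rng.normal(size=(n_vars, n_time))
x_new[:, 20:] = X3.mean(axis=0)[:, 20:]          # fill future with mean trajectory
pred = float(pls.predict(x_new.reshape(1, -1))[0, 0])
print("predicted end-of-batch quality:", pred)
```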
Book
Use of a membrane within a bioreactor (MBR), either microbial or enzymatic, is a technology that has existed for 30 years to increase process productivity and/or facilitate the recovery and purification of biomolecules. Currently, this technology is attracting increasing interest for speeding up processes and improving sustainability. In this work, we present the current status of MBR technologies. Fundamental aspects and process design are outlined, and emerging applications are identified in both aspects of the engineering, i.e., enzymatic and microorganism-based (bacteria, animal cells, and microalgae), including microscale aspects and wastewater treatment. Comparison of this integrated technology with classical batch or continuous bioreactors is made to highlight the performance of MBRs and identify factors limiting their performance and the different possibilities for their optimization.
Article
A novel networked process monitoring, fault propagation identification, and root cause diagnosis approach is developed in this study. First, process network structure is determined from prior process knowledge and analysis. The network model parameters including the conditional probability density functions of different nodes are then estimated from process operating data to characterize the causal relationships among the monitored variables. Subsequently, the Bayesian inference‐based abnormality likelihood index is proposed to detect abnormal events in chemical processes. After the process fault is detected, the novel dynamic Bayesian probability and contribution indices are further developed from the transitional probabilities of monitored variables to identify the major faulty effect variables with significant upsets. With the dynamic Bayesian contribution index, the statistical inference rules are, thus, designed to search for the fault propagation pathways from the downstream backwards to the upstream process. In this way, the ending nodes in the identified propagation pathways can be captured as the root cause variables of process faults. Meanwhile, the identified fault propagation sequence provides an in‐depth understanding as to the interactive effects of faults throughout the processes. The proposed approach is demonstrated using the illustrative continuous stirred tank reactor system and the Tennessee Eastman chemical process with the fault propagation identification results compared against those of the transfer entropy‐based monitoring method. The results show that the novel networked process monitoring and diagnosis approach can accurately detect abnormal events, identify the fault propagation pathways, and diagnose the root cause variables. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2348–2365, 2013
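A drastically simplified sketch of the abnormality-likelihood idea follows, on a toy three-node causal chain with linear-Gaussian conditionals in place of the paper's general conditional densities; the network, threshold, and root-cause rule are assumptions.

```python
# Sketch: each monitored variable is a node whose conditional distribution
# given its parents is fit from normal operating data (linear-Gaussian here);
# a low likelihood of a new observation flags abnormality, and the most
# upstream flagged node is proposed as the root-cause candidate.
import numpy as np
from scipy import stats

# toy causal chain: x0 -> x1 -> x2
rng = np.random.default_rng(4)
x0 = rng.normal(size=1000)
x1 = 0.9 * x0 + 0.3 * rng.normal(size=1000)
x2 = 0.8 * x1 + 0.3 * rng.normal(size=1000)
data = {"x0": x0, "x1": x1, "x2": x2}
parents = {"x0": None, "x1": "x0", "x2": "x1"}

models = {}
for node, par in parents.items():
    if par is None:
        models[node] = (0.0, data[node].mean(), data[node].std())
    else:
        b = np.polyfit(data[par], data[node], 1)[0]       # regression slope
        resid = data[node] - b * data[par]
        models[node] = (b, resid.mean(), resid.std())

def abnormality(obs):
    scores = {}
    for node, (b, mu, sd) in models.items():
        pred = b * obs[parents[node]] if parents[node] else 0.0
        scores[node] = -stats.norm.logpdf(obs[node] - pred, mu, sd)
    return scores

obs = {"x0": 4.0, "x1": 0.9 * 4.0, "x2": 0.8 * 0.9 * 4.0}  # upset entering at x0
scores = abnormality(obs)
# dict order is already topological, so the first flagged node is most upstream
flagged = [n for n, s in scores.items() if s > 5.0]
print("flagged nodes:", flagged, "-> root-cause candidate:", flagged[0])
```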
Article
Statistical process control (SPC) is a sub-area of statistical quality control. Considering the successful results of SPC applications in various manufacturing and service industries, this field has attracted a large number of experts. Despite the development of knowledge in this field, it is hard to find a comprehensive perspective or model covering such a broad area, and most studies related to SPC have focused only on a limited part of this knowledge area. Building on many implemented cases in statistical process control, case-based reasoning (CBR) systems are used in this study to develop a knowledge-based system (KBS) for SPC that organizes this knowledge area. Case representation and retrieval play an important role in implementing a CBR system. Thus, a format for representing SPC cases and the similarity measures for case retrieval are proposed in this paper.
Article
Historical data collected from processes are readily available. This paper looks at recent advances in the use of data-driven models built from such historical data for monitoring, fault diagnosis, optimization and control. Latent variable models are used because they provide reduced dimensional models for high dimensional processes. They also provide unique, interpretable and causal models, all of which are necessary for the diagnosis, control and optimization of any process. Multivariate latent variable monitoring and fault diagnosis methods are reviewed and contrasted with classical fault detection and diagnosis approaches. The integration of monitoring and diagnosis techniques by using an adaptive agent-based framework is outlined and its use for fault-tolerant control is compared with alternative fault-tolerant control frameworks. The concept of optimizing and controlling high dimensional systems by performing optimizations in the low dimensional latent variable spaces is presented and illustrated by means of several industrial examples.
Article
This work addresses the problem of fault detection and isolation for nonlinear processes when some process variable measurements are available at regular sampling intervals and the remaining process variables are measured at an asynchronous rate. First, a fault detection and isolation (FDI) scheme that employs model-based techniques is proposed that allows for the isolation of faults. The proposed FDI scheme provides detection and isolation of any fault that enters into the differential equation of only synchronously measured states and grouping of faults that enter into the differential equation of any asynchronously measured state. For a fully coupled process system, fault detection occurs shortly after a fault takes place, and fault isolation, limited by the arrival of asynchronous measurements, occurs when asynchronous measurements become available. Fault-tolerant control methods with a supervisory control component are then employed to achieve stability in the presence of actuator failures using control system reconfiguration. Numerical simulations of a polyethylene reactor are performed to demonstrate the applicability and performance of the proposed fault detection and isolation and fault-tolerant control method in the presence of asynchronous measurements.
Article
An adaptive agent-based hierarchical framework for fault type classification and diagnosis in continuous chemical processes is presented. Classification techniques such as Fisher’s discriminant analysis (FDA) and partial least-squares discriminant analysis (PLSDA) and diagnosis tools such as variable contribution plots are used by agents in this supervision system. After an abnormality is detected, the classification results reported by different diagnosis agents are summarized via a performance-based criterion, and a consensus diagnosis decision is formed. In the agent management layer of the proposed system, the performances of diagnosis agents are evaluated under different fault scenarios, and the collective performance of the supervision system is improved via performance-based consensus decision and adaptation. The effectiveness of the proposed adaptive agent-based framework for the classification of faults is illustrated using a simulated continuous stirred tank reactor (CSTR) network.
Article
This paper presents an integrated fault detection (FD) and fault-tolerant control (FTC) architecture for spatially distributed processes described by quasi-linear parabolic partial differential equations (PDEs) with control constraints and control actuator faults. Under full state feedback conditions, the architecture integrates model-based fault detection, spatially distributed feedback, and supervisory control to orchestrate switching between different actuator configurations in the event of faults. The various components are designed on the basis of appropriate reduced-order models that capture the dominant dynamics of the distributed process. The fault detection filter replicates the dynamics of the fault-free reduced-order model and uses its behavioral discrepancy from that of the actual system as a residual for fault detection. Owing to the inherent approximation errors in the reduced-order model, appropriate fault detection and control reconfiguration criteria are derived for the implementation of the FTC architecture on the distributed system to prevent false alarms. These criteria are expressed in terms of residual thresholds that capture the closeness of solutions between the fault-free reduced and full-order models. A singular perturbations formulation is used to link these thresholds with the separation between the slow and fast eigenvalues of the spatial differential operator necessary for closed-loop stability. Under output feedback conditions, an appropriate state estimation scheme is incorporated into the control architecture, and the effects of estimation errors are accounted for in the design of the feedback controller, the fault detection filter, and the control reconfiguration logic. The proposed approach is successfully applied to the problem of constrained, actuator fault-tolerant stabilization of an unstable steady state of a representative diffusion-reaction process.
Article
An adaptive hierarchical framework for process supervision and fault-tolerant control with agent-based systems is presented. The framework consists of modules for fault detection and diagnosis (FDD), system identification and distributed control, and a hierarchical structure for performance-based agent adaptation. Multivariate continuous process monitoring methodologies and several fault discrimination and classification techniques are implemented in the FDD modules to be used by multiple agents. In the process supervision layer, the continuous intramodular communication between FDD and control modules communicates the existence of an abnormality in the process, type of the abnormality, and affected process sections to the distributed model predictive control agents. In the agent management layer, the performances of all FDD and control agents are evaluated under specific process conditions. Performance-based consensus criteria are used to prioritize the best-performing agents in consensus decision making in every level of process supervision and fault-tolerant control. The collective performance of the supervision system is improved via performance-based consensus decision making and adaptation. The effectiveness of the proposed adaptive agent-based framework for fault-tolerant control is illustrated using a simulated continuous stirred-tank reactor network. Copyright © 2011 John Wiley & Sons, Ltd.
Article
This paper develops a robust fault detection and isolation (FDI) and fault-tolerant control (FTC) structure for distributed processes modeled by nonlinear parabolic partial differential equations (PDEs) with control constraints, time-varying uncertain variables, and a finite number of sensors that transmit their data over a communication network. The network imposes limitations on the accuracy of the output measurements used for diagnosis and control purposes that need to be accounted for in the design methodology. To facilitate the controller synthesis and fault diagnosis tasks, a finite-dimensional system that captures the dominant dynamic modes of the PDE is initially derived and transformed into a form where each dominant mode is excited directly by only one actuator. A robustly stabilizing bounded output feedback controller is then designed for each dominant mode by combining a bounded Lyapunov-based robust state feedback controller with a state estimation scheme that relies on the available output measurements to provide estimates of the dominant modes. The controller synthesis procedure facilitates the derivation of: (1) an explicit characterization of the fault-free behavior of each mode in terms of a time-varying bound on the dissipation rate of the corresponding Lyapunov function, which accounts for the uncertainty and network-induced measurement errors and (2) an explicit characterization of the robust stability region where constraint satisfaction and robustness with respect to uncertainty and measurement errors are guaranteed. Using the fault-free Lyapunov dissipation bounds as thresholds for FDI, the detection and isolation of faults in a given actuator are accomplished by monitoring the evolution of the dominant modes within the stability region and declaring a fault when the threshold is breached. The effects of network-induced measurement errors are mitigated by confining the FDI region to an appropriate subset of the stability region and enlarging the FDI residual thresholds appropriately. It is shown that these safeguards can be tightened or relaxed by proper selection of the sensor spatial configuration. Finally, the implementation of the networked FDI–FTC architecture on the infinite-dimensional system is discussed and the proposed methodology is demonstrated using a diffusion–reaction process example. Copyright © 2008 John Wiley & Sons, Ltd.
Article
A methodology is presented for the design of integrated, model-based fault diagnosis and reconfigurable control systems for transport-reaction processes modeled by nonlinear parabolic partial differential equations (PDEs) with control constraints and actuator faults. The methodology brings together nonlinear feedback control, fault detection and isolation (FDI), and performance-based supervisory switching between multiple actuator configurations. Using an approximate, finite-dimensional model that captures the PDE's dominant dynamic modes, a stabilizing nonlinear feedback controller is initially designed for each actuator configuration, and its stability region is explicitly characterized in terms of the control constraints and actuator locations. To facilitate the fault diagnosis task, the locations of the control actuators are chosen in a way that ensures that the evolution of each dominant mode, in appropriately chosen coordinates, is excited by only one actuator. Then, a set of dedicated FDI filters, each replicating the fault-free behavior of a given state of the approximate system, are constructed. The choice of actuator locations ensures that the residual of each filter is sensitive to faults in only one actuator and decoupled from the rest, thus, allowing complete fault isolation. Finally, a set of switching rules are derived to orchestrate switching from the faulty actuators to healthy fallbacks in a way that preserves closed-loop stability and minimizes the closed-loop performance deterioration resulting from actuator faults. Precise FDI thresholds and control reconfiguration criteria that account for model reduction errors are derived to prevent false alarms when the reduced order model-based fault-tolerant control structure is implemented on the process. A singular perturbation formulation is used to link these thresholds with the degree of separation between the slow and fast eigenvalues of the spatial differential operator. The developed methodology is successfully applied to the problem of constrained, actuator fault-tolerant stabilization of an unstable steady-state of a representative diffusion-reaction process. © 2007 American Institute of Chemical Engineers AIChE J, 2007
Article
This work considers the problem of fault-tolerant control of nonlinear processes with input constraints subject to control system/actuator failures, and presents and demonstrates an approach to fault-tolerant control predicated upon the idea of integrating fault-detection, feedback and supervisory control. Specifically, a nonlinear observer is initially designed to generate estimates of the states that are used to implement Lyapunov-based state feedback controllers and a fault-detection filter. The fault-detection filter uses the state estimates to compute the expected closed-loop behavior in the absence of faults, and detects the occurrence of faults by comparing the expected behavior of the process variables with the estimates. A switching policy is then derived to orchestrate the activation/deactivation of the constituent control configurations to achieve fault-tolerant control in the event that a failure is detected. Finally, simulation studies are presented to demonstrate the implementation and evaluate the effectiveness of the proposed fault-tolerant control scheme.
Conference Paper
A model-based fault-tolerant control (FTC) structure integrating nonlinear feedback control, state estimation, fault detection and isolation (FDI), and stability-based actuator reconfiguration is developed for distributed processes modeled by nonlinear parabolic PDEs with control constraints, actuator faults and limited state measurements. The design is based on an appropriate finite-dimensional model that approximates the dominant process dynamics. A key idea in the design is the judicious placement of control actuators and measurement sensors across the spatial domain in a way that enhances the FDI and fault-tolerance capabilities of the control system. Using singular perturbation techniques, precise FDI thresholds and control reconfiguration criteria accounting for model reduction and state estimation errors are derived to prevent false alarms when the FTC structure is implemented on the infinite-dimensional system. The criteria are tied to the separation between the slow and fast eigenvalues of the differential operator. Finally, the implementation of the developed architecture is demonstrated using a diffusion- reaction process example.
Conference Paper
This paper presents a fault-tolerant control (FTC) architecture for spatially distributed processes described by quasi-linear parabolic partial differential equations (PDEs) with control constraints and control actuator faults. The architecture integrates model-based fault detection, spatially distributed feedback, and supervisory control to orchestrate switching between different actuator configurations in the event of faults. The various components are designed on the basis of appropriate reduced-order models that capture the dominant dynamics of the distributed process. The fault detection filter replicates the dynamics of the fault-free, reduced-order model, and uses the discrepancy from the behavior of the actual system as a residual for fault detection. Owing to the inherent approximation errors in the reduced-order model, appropriate fault detection and control reconfiguration criteria are derived for the implementation of the FTC architecture on the distributed system to prevent false alarms. The criteria are expressed in terms of residual thresholds that capture the closeness of solutions between the fault-free, reduced and full-order models. A singular perturbation formulation is used to link these thresholds with the separation between the slow and fast eigenvalues of the spatial differential operator necessary for closed-loop stability.
Article
Full-text available
Principal Component Analysis (PCA) and Partial Least Squares (PLS) are two commonly used techniques for process monitoring. Both PCA and PLS assume that the data to be analysed are not self-correlated, i.e., time-independent. However, most industrial processes are dynamic, so the assumption of time-independence made by PCA and PLS is invalid in nature. Dynamic extensions to PCA and PLS, the so-called DPCA and DPLS, have been developed to address this problem, but with limited success. Canonical Variate Analysis (CVA), in contrast, is a state-space-based monitoring tool and hence more suitable for dynamic monitoring than DPCA and DPLS. CVA is a linear tool, and traditionally, for simplicity, the upper control limit (UCL) of its associated monitoring metrics is derived under a Gaussian assumption. However, most industrial processes are nonlinear, and the Gaussian assumption is invalid for such processes, so CVA with a UCL based on this assumption may not correctly identify underlying faults. In this work, a new monitoring technique using CVA with UCLs derived from the probability density function estimated through kernel density estimation (KDE) is proposed and applied to the simulated nonlinear Tennessee Eastman Process Plant. The proposed CVA with KDE approach significantly improves the monitoring performance and detects faults earlier than the other methods examined in this study.
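For illustration only, a minimal sketch of the KDE-derived control-limit idea described above; the chi-square training sample, the 99% level, and all variable names are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in for in-control monitoring-statistic values from a fitted model.
rng = np.random.default_rng(0)
t2_train = rng.chisquare(df=5, size=2000)

kde = gaussian_kde(t2_train)                 # estimate the statistic's pdf
grid = np.linspace(0.0, 2.0 * t2_train.max(), 4000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                               # normalized numerical CDF

alpha = 0.01                                 # 99% upper control limit
ucl = grid[np.searchsorted(cdf, 1.0 - alpha)]

t2_new = 18.4                                # a new monitoring value
print(f"UCL = {ucl:.2f}, alarm: {t2_new > ucl}")
```

Because the limit comes from the estimated density rather than a Gaussian formula, it adapts to skewed or heavy-tailed in-control statistics.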
Article
This contribution describes how disturbances in a control system can be isolated and diagnosed automatically based on plant topology. To demonstrate this, a prototype software tool has been designed and implemented which, when given an electronic process schematic of a plant and results from a data-driven analysis, allows the user to pose queries about the plant and to find root causes of plant-wide disturbances. This hybrid system combines two new technologies: plant topology information written in XML according to the computer aided engineering exchange (CAEX) schema, and the results of a signal analysis tool called plant-wide disturbance analysis (PDA). The isolation and diagnosis of the root causes of plant-wide disturbances is enhanced when process connectivity is considered alongside the results of data-driven analysis.
Article
This paper presents a selected survey covering advances in fault diagnosis and fault-tolerant control using data-driven techniques. A brief summary of the general developments in fault detection and diagnosis for industrial processes is given, followed by discussions of the widely used data-driven and knowledge-based techniques. A successful application example dealing with faults caused by the misplacement of control-loop set points is also given, and several areas of potential future directions are outlined.
Article
Among process monitoring techniques, Principal Component Analysis (PCA) and Partial Least Squares Regression Analysis (PLS) assume that the observations at different times are independent. However, for most industrial processes, these assumptions are invalid because of their dynamic features. For dynamic processes, the Canonical Variate Analysis (CVA) based approach is more appropriate than the PCA and PLS based approaches. The CVA model is linear, and control limits associated with the CVA are traditionally derived based on the Gaussian assumption. However, most industrial processes are non-linear, and the Gaussian assumption is invalid for such processes, so techniques based on this assumption may not correctly identify underlying faults. In this work, a new monitoring technique using the CVA with control limits derived from the probability density function estimated through kernel density estimation (KDE) is proposed and applied to the Tennessee Eastman Process Plant. The proposed CVA with KDE approach significantly improves the monitoring performance compared to the other methods mentioned above.
Conference Paper
This work proposes a simple, robust, efficient, and practicable method to automatically flag poor control performance. It uses only the run lengths of the actuating errors. Each run length is defined as a state, and transitions between states are modeled as a Markov chain. The transition probabilities are then compared with control limits established from a user-defined period of good control.
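A rough sketch of the run-length idea, under the assumption that "run length" means the length of same-sign runs of the control error; the state binning and test signals are illustrative:

```python
import numpy as np

def run_lengths(error):
    """Lengths of consecutive same-sign runs of the actuating error."""
    signs = np.sign(error)
    lengths, count = [], 1
    for a, b in zip(signs[:-1], signs[1:]):
        if a == b:
            count += 1
        else:
            lengths.append(count)
            count = 1
    lengths.append(count)
    return np.asarray(lengths)

def transition_matrix(states, n_states):
    """Empirical Markov transition probabilities between run-length states."""
    P = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        P[i, j] += 1
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

cap = 5                                # run lengths 1..5+ become states 0..4
rng = np.random.default_rng(1)
good = np.clip(run_lengths(rng.normal(size=5000)), 1, cap) - 1
P_good = transition_matrix(good, cap)

# An oscillating loop yields many short runs; its transitions differ.
test_err = np.sin(np.arange(2000)) + 0.1 * rng.normal(size=2000)
P_test = transition_matrix(np.clip(run_lengths(test_err), 1, cap) - 1, cap)
print("max transition-probability deviation:", np.abs(P_test - P_good).max().round(2))
```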
Article
Full-text available
This paper gives an overview of industrial applications of real-time knowledge-based expert systems (KBESs) for process control. After a brief overview of the features of a KBES useful in process control, several case studies are reviewed. The lessons learned are summarized.
Article
Full-text available
Many chemical processes have a very large number of measured variables that are recorded frequently. Often, many of these variables are highly correlated and thus provide some redundant information concerning the state of the process and its sensors. When this is the case, multivariate techniques such as Principal Components Analysis and Partial Least Squares calibration can be used in conjunction with Statistical Process Control methods to identify process upsets and sensor failures. We refer to this combination of technologies as Multivariate Statistical Process Control (MSPC). Examples are shown for two types of sensor failure. The first class is where sensors develop a bias. The second class is where sensors become corrupted by noise. We show how confidence limits can be put on PCA residuals for the purpose of detecting failed sensors of both types. This method is compared to a PLS based method where individual variables are calibrated against other system variables and the prediction residual is used in a manner similar to the use of PCA residuals.
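A minimal sketch of the PCA-residual approach to sensor-fault detection; the simulated sensor array, the empirical 99% limit, and the injected bias are assumptions for illustration:

```python
import numpy as np

# 500 samples of 6 correlated "sensors" driven by 2 latent factors.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:2].T                          # loadings of the two retained components

def spe(x):
    """Squared prediction error (Q statistic) of one observation."""
    xs = (x - mu) / sd
    resid = xs - P @ (P.T @ xs)       # part not explained by the PCA model
    return float(resid @ resid)

limit = np.quantile([spe(row) for row in X], 0.99)   # empirical 99% limit

x_new = X[0].copy()
x_new[3] += 4 * sd[3]                 # bias fault on sensor 3
print(f"SPE = {spe(x_new):.2f}, limit = {limit:.2f}")
```

A biased or noisy sensor breaks the correlation structure the model has learned, which inflates the residual even when each reading is individually plausible.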
Article
The problem of intelligence in the context of system identification is discussed. Some of the reasons for the development of intelligent software for system identification are presented. The problem of choosing a software solution for intelligent identification is illustrated with the example of ARX model identification. Two different software approaches are compared: the traditional approach based on decision trees and the expert-system approach using knowledge bases. This comparison reveals no convincing reasons for the superiority of the second approach over the first. On the contrary, it seems that, because of the nature of identification problems, the first approach is more basic and more cost-effective for the development of intelligent identification software.
Article
An expert system for system identification, written in the OPS83 knowledge-based programming language, is presented. At the end of a consultation, it provides the user with a set of good models for the system under investigation. If the sampling period used to collect the data appears unsuitable, the expert system will modify it. An intelligent search through the set of all admissible models is made in order to find the best models of the system. Validation criteria are used to classify the models, and a complete set of facilities is at the user's disposal that allows the expert system's behaviour to be modified at execution time. One advantage of the expert-system approach is that one can not only change decision parameters (such as confidence levels) very easily, but also change existing rules or add new rules at the price of only one more compilation. Finally, simulations on data from industrial processes have shown that the expert system behaves just as well as human experts, while on simulated noisy data it finds the true model in the class of ARX or ARARX (also called GLS) models used to produce them.
Article
Industrial continuous processes are usually operated under closed-loop control, yielding process measurements that are autocorrelated, cross-correlated, and collinear. A statistical process monitoring (SPM) method based on state variables is introduced to monitor such processes. The statistical model that describes the in-control variability is based on a canonical variate (CV) state-space model. The CV state variables are linear combinations of the past process measurements which explain the variability of the future measurements the most, and they are regarded as the principal dynamic dimensions. A T2 statistic based on the CV state variables is utilized for developing the SPM procedure. The CV state variables are also used for monitoring sensor reliability. An experimental application to a high-temperature short-time (HTST) pasteurization process illustrates the proposed methodology.
Article
Multivariate Statistical Process Performance Monitoring (MSPPM) provides a diagnostic tool for the monitoring and detection of process malfunctions in continuous and batch manufacturing processes. This paper initially reviews the concept of process performance monitoring through an industrial application to a fluidised-bed reactor and a simulation of a batch methyl methacrylate polymerisation reactor, prior to describing some of the more recent work being carried out. This includes the development of performance monitoring schemes from minimal process data, the use of multi-block techniques for plant-wide monitoring, and the development of generic models for the monitoring of multiple products, grades, or recipes.
Article
This paper describes a fault diagnosis expert system based on the Possible Cause and Effect Graph methodology, an enhanced Signed Digraph approach. The expert system incorporates the Bayesian belief theorem and an explanation capability. Causal relationships in single-loop and cascade controllers were remodeled to suit the implementation. In dealing with recycle loops between process variables, a knowledge base containing rules was employed to "break" the cyclic loop dynamically. During the diagnosis phase, the system dynamically modifies the causal network and adjusts the conditional probabilities using all available plant information and other processing techniques such as data reconciliation. This expert system has been implemented successfully on a pilot-scale distillation column, with very promising results.
Article
Industrial continuous processes may have a large number of process variables and are usually operated for extended periods at fixed operating points under closed-loop control, yielding process measurements that are autocorrelated, cross-correlated, and collinear. A statistical process monitoring (SPM) method based on multivariate statistics and system theory is introduced to monitor the variability of such processes. The statistical model that describes the in-control variability is based on a canonical-variate (CV) state-space model that is an equivalent representation of a vector autoregressive moving-average time-series model. The CV state variables obtained from the state-space model are linear combinations of the past process measurements that explain the variability of the future measurements the most. Because of this distinctive feature, the CV state variables are regarded as the principal dynamic directions. A T2 statistic based on the CV state variables is used for developing an SPM procedure. Simple examples based on simulated data and an experimental application based on a high-temperature short-time milk pasteurization process illustrate advantages of the proposed SPM method.
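The following sketch illustrates the canonical-variate monitoring idea on a toy autocorrelated series: canonical directions come from an SVD of the whitened past-future cross-covariance, and a T2 statistic is formed on the resulting states. Lag sizes, model order, and the AR(1) data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lag, order = 3000, 5, 2
y = np.zeros((n, 2))
for t in range(1, n):                         # autocorrelated test process
    y[t] = 0.8 * y[t - 1] + rng.normal(size=2)

# Stacked (and centered) past and future windows at each time t.
past = np.array([y[t - lag:t].ravel() for t in range(lag, n - lag)])
fut = np.array([y[t:t + lag].ravel() for t in range(lag, n - lag)])
past -= past.mean(axis=0)
fut -= fut.mean(axis=0)
N = len(past)
Spp, Sff, Sfp = past.T @ past / N, fut.T @ fut / N, fut.T @ past / N

# Canonical directions from the SVD of the whitened cross-covariance.
Lp, Lf = np.linalg.cholesky(Spp), np.linalg.cholesky(Sff)
H = np.linalg.solve(Lf, Sfp) @ np.linalg.inv(Lp).T
_, s, Vt = np.linalg.svd(H)
J = Vt[:order] @ np.linalg.inv(Lp)            # maps a past window to CV states

states = past @ J.T                           # ~unit-variance, uncorrelated
T2 = np.sum(states**2 / states.var(axis=0), axis=1)
print("empirical 99% T2 limit:", round(float(np.quantile(T2, 0.99)), 2))
```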
Article
When selecting the tools available to build a knowledge-based system, aspects of knowledge representation and control strategy as well as the general programming environment must be evaluated in light of the task to be performed. The knowledge representation strategy is the method by which the domain knowledge of interest is represented or stored in the knowledge base of the computing device. The control strategy is the method used to reason or make inferences about the knowledge contained in the knowledge base.
Article
Statistical process control methods for monitoring processes with multivariate measurements in both the product quality variable space and the process variable space are considered. Traditional multivariate control charts based on X2 and T2 statistics are shown to be very effective for detecting events when the multivariate space is not too large or ill-conditioned. Methods for detecting the variable(s) contributing to the out-of-control signal of the multivariate chart are suggested. Newer approaches based on principal component analysis and partial least squares are able to handle a large, ill-conditioned measurement space; they also provide diagnostics which can point to possible assignable causes for the event. The methods are illustrated on a simulated process of a high-pressure low-density polyethylene reactor, and examples of their application to a variety of industrial processes are referenced.
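A small sketch of a T2 contribution computation; the score-weighted decomposition used here is one common form of contribution plot, and the data and retained dimension are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 5)) @ rng.normal(size=(5, 5))   # correlated data
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd

_, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                        # loadings
lam = s[:k] ** 2 / (len(X) - 1)     # variances of the retained scores

def t2_contributions(x):
    """Per-variable contributions to T2 for one observation."""
    xs = (x - mu) / sd
    t = P.T @ xs                    # scores of the new observation
    # c_j = sum over components a of (t_a / lam_a) * p_aj * x_j
    return np.array([np.sum(t / lam * P[j] * xs[j]) for j in range(len(xs))])

x_f = X[0].copy()
x_f[2] += 5 * sd[2]                 # fault injected on variable 2
print("largest contribution from variable:", int(np.argmax(t2_contributions(x_f))))
```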
Article
Principal components and factor analysis are two techniques finding increasing application among quality engineers concerned with processes that have more than one response variable. In this, the first of a three-part series, the concept of principal components is introduced. Estimation, significance tests, and residual analysis are presented along with two numerical examples. Parts two and three will be found in succeeding issues.
Article
This paper deals with an identification package. In the first part, the various classical off-line parameter estimation methods implemented are described. A graphic editor allows one to prepare the input-output data easily, which ensures a correct estimation. However, good methods and robust algorithms are not sufficient to guarantee a successful use of identification in industry. Another necessary component is the know-how of the person carrying it out. In the second part, the expert system SEXI (“Système EXpert en Identification”) is considered, which uses the software mentioned above so as to determine the model structure. SEXI behaves as a supervisor: it selects and runs the appropriate numerical module to obtain the quantitative results necessary for its reasoning and it iterates this process until a relevant model is found.
Article
Multivariable additive NARX (nonlinear autoregressive with exogenous inputs) modeling of process systems is presented. The model structure is similar to that of a generalized additive model (GAM) and is estimated with a nonlinear canonical variate analysis (CVA) algorithm called CANALS. The system is modeled by partitioning the data into two groups of variables. The first is a collection of future outputs, and the second is a collection of past inputs and outputs and future inputs. This approach is similar to linear subspace state-space modeling. An illustrative example of modeling is presented on the basis of a simulated continuous chemical reactor that exhibits multiple steady states in the outputs for a fixed level of the input.
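As a loose illustration of the additive structure (not the CANALS algorithm itself), the sketch below fits per-regressor polynomial terms by ordinary least squares on a simulated nonlinear process; the model orders and test process are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
u = rng.uniform(-1, 1, size=n)
y = np.zeros(n)
for t in range(1, n):                         # nonlinear test process
    y[t] = (0.6 * y[t - 1] - 0.3 * y[t - 1] ** 2
            + np.tanh(2 * u[t - 1]) + 0.01 * rng.normal())

# Additive model y[t] ~ f1(y[t-1]) + f2(u[t-1]), each f a cubic polynomial.
deg = 3
regressors = [y[:-1], u[:-1]]
Phi = np.hstack([np.vander(r, deg + 1, increasing=True)[:, 1:] for r in regressors])
Phi = np.hstack([np.ones((n - 1, 1)), Phi])   # shared intercept
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
resid = y[1:] - Phi @ theta
print("one-step RMS error:", round(float(np.sqrt(np.mean(resid**2))), 3))
```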
Article
An expert system is presented for automated time series analysis of laboratory sample input signals. The system, AUTOCORR, builds a model of the time series by identifying the processes that are present. These are an uncorrelated random process and, underlying this, possibly one or more of the following: a first-order autoregressive process, a trend and a periodic process. AUTOCORR has a knowledge base of 44 rules and 41 facts for this purpose. The employed shell, INFER, allows the use of algorithmic procedures. Elaborate tests with simulated signals show that AUTOCORR has a very low false positive score and is successful in describing time series for laboratory simulation models.
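A toy version of the kind of rules such a system encodes, checking a series for a linear trend and lag-1 autocorrelation before declaring it uncorrelated noise; the thresholds and test statistics are illustrative simplifications:

```python
import numpy as np

def describe_series(y, z_crit=1.96):
    """Tiny rule base: report 'trend' and/or 'AR(1)-like', else noise."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    findings = []
    # Rule 1: significant linear trend (approximate z-test on the slope).
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = resid.std(ddof=2) / (t.std() * np.sqrt(n))
    if abs(slope / se) > z_crit:
        findings.append("trend")
        y = resid                        # detrend before the next rule
    # Rule 2: lag-1 autocorrelation outside the +/- z/sqrt(n) band.
    yc = y - y.mean()
    r1 = (yc[:-1] @ yc[1:]) / (yc @ yc)
    if abs(r1) > z_crit / np.sqrt(n):
        findings.append(f"AR(1)-like, r1={r1:.2f}")
    return findings or ["uncorrelated noise"]

rng = np.random.default_rng(5)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.normal()
print(describe_series(x))                # expect an AR(1)-like finding
```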
Article
In this paper, we present two novel algorithms to realize a finite-dimensional, linear time-invariant state-space model from input-output data. The algorithms have a number of common features. They are classified as subspace model identification schemes, in that a major part of the identification problem consists of calculating specially structured subspaces of spaces defined by the input-output data. This structure is then exploited in the calculation of a realization. Another common feature is their algorithmic organization: an RQ factorization followed by a singular value decomposition and the solution of an overdetermined set (or sets) of equations. The schemes assume that the underlying system has an output-error structure and that a measurable input sequence is available. The latter characteristic indicates that both schemes are versions of the MIMO Output-Error State Space model identification (MOESP) approach. The first algorithm is denoted as the elementary MOESP scheme; its subspace approximation step requires, in addition to input-output data, knowledge of a restricted set of Markov parameters. The second algorithm, referred to as the ordinary MOESP scheme, relies solely on input-output data. A compact implementation of both schemes is presented. Although the presentation here is restricted to error-free input-output data, a framework is set up in an identification context. The identification aspects of the presented realization schemes are treated in the forthcoming Parts 2 and 3.
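A bare-bones sketch of the subspace idea for noise-free data: project a block-Hankel matrix of outputs onto the orthogonal complement of the input row space, estimate the extended observability matrix by SVD, and read A and C from its shift structure. The toy system and block sizes are assumptions, and no instrumental-variable or noise handling is included:

```python
import numpy as np

# True 2nd-order SISO system used only to generate noise-free data.
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([1.0, 0.5])
C = np.array([1.0, 0.0])

rng = np.random.default_rng(6)
n_samp, n_block = 600, 10
u = rng.normal(size=n_samp)
x = np.zeros(2)
y = np.zeros(n_samp)
for t in range(n_samp):
    y[t] = C @ x
    x = A @ x + B * u[t]

def hankel(sig, rows):
    cols = len(sig) - rows + 1
    return np.array([sig[i:i + cols] for i in range(rows)])

U_h, Y_h = hankel(u, n_block), hankel(y, n_block)
# Remove the input's influence: project onto the orthogonal complement
# of the row space of U_h.
Pi = np.eye(U_h.shape[1]) - U_h.T @ np.linalg.pinv(U_h.T)
Usv, s, _ = np.linalg.svd(Y_h @ Pi, full_matrices=False)
order = 2                                        # where singular values drop
Gamma = Usv[:, :order]                           # extended observability estimate
A_est = np.linalg.pinv(Gamma[:-1]) @ Gamma[1:]   # shift-invariance of Gamma
C_est = Gamma[:1]                                # first block row (up to similarity)
print("estimated poles:", np.sort(np.linalg.eigvals(A_est).real).round(3))
# True poles 0.9 and 0.7 are recovered up to a similarity transform.
```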
Article
Hazard and operability (HAZOP) analysis is the study of systematically identifying every conceivable abnormal process deviation, its abnormal causes, and its adverse hazardous consequences in a chemical plant. HAZOP analysis is a difficult, time-consuming, and labor-intensive activity. An automated HAZOP system can reduce the time and effort involved in a HAZOP review, make the review more thorough and detailed, and minimize or eliminate human errors. Towards that goal, a knowledge-based system, called HAZOPExpert, is proposed in this article. In this approach, HAZOP knowledge is divided into process-specific and process-independent components in a model-based manner. The framework allows these two components to interact during the analysis to address the process-specific aspects of HAZOP analysis while maintaining the generality of the system. Process-general knowledge is represented as HAZOP models that are developed in a process-independent manner and are applicable to a wide variety of process flowsheets. The important features of HAZOPExpert and its performance on an industrial case study are described.
Conference Paper
The wealth of process information generated from sensor readings can be used to detect bias changes, drift, and/or higher levels of noise in various process sensors. New multivariate statistical techniques permit frequent audits of process sensors. These methods are based on the evaluation of residuals generated by utilizing plant models developed with principal components analysis (PCA) or partial least squares (PLS) methods. The fact that the prediction of each variable in the process involves all the other process variables (PLS) and even itself (PCA) may cause false alarms even though the related sensors function properly. A multipass PLS regression technique is proposed to eliminate the false alarms. The sensor with the highest corruption is discarded from both the calibration and the test data when a sensor failure is detected. This eliminates the effect of the corrupted data on the prediction of the remaining process variables and prevents false alarms. The technique is applied to a High Temperature Short Time (HTST) pasteurization pilot plant with six temperature measurements and one flow-rate measurement.
Conference Paper
Very general reduced-order filtering and modeling problems are phrased in terms of choosing a state, based upon past information, to optimally predict the future as measured by a quadratic prediction-error criterion. The canonical variate method is extended to approximately solve this problem and give a near-optimal reduced-order state-space model. The approach is related to the Hankel norm approximation method. The central step in the computation involves a singular value decomposition which is numerically very accurate and stable. An application to reduced-order modeling of transfer functions for stream flow dynamics is given.
Article
An expert system for the identification of linear multiple input single output systems in ARARX form, written with the OPS83 rule-based programming language, is presented. Provided with a data set, the expert system will organize an intelligent search through the set of candidate structures, and end up with a “best” model according to a “quality index” that incorporates a number of validation criteria. Tests on both industrial and simulated data have shown that the expert system behaves as well as human experts, with considerable savings in time.
Article
Recent advances in artificial intelligence have changed the fundamental assumptions upon which the progress of computer-aided process engineering (modeling and methodologies) during the last 30 yr has been founded. Thus, in certain instances, numerical computations today constitute inferior alternatives to qualitative and/or semi-quantitative models and procedures which can capture and utilize more broadly-based sources of knowledge. In this paper it will be shown how process development and design, as well as planning, scheduling, monitoring, analysis and control of process operations can benefit from improved knowledge-representation schemes and advanced reasoning control strategies. It will also be argued that the central challenge coming from research advances in artificial intelligence is "modeling the knowledge", i.e. modeling: (a) physical phenomena and the systems in which they occur; (b) information handling and processing systems; and (c) problem-solving strategies in design, operations and control. Thus, different strategies require different forms of declarative knowledge, and the success or failure of various design, planning, diagnostic and control systems depends on the extent of actively utilizable knowledge. Furthermore, this paper will outline the theoretical scope of important contributions from AI and what their impact has been and will be on the formulation and solution of process engineering problems.
Article
The paper reviews the state of the art of fault detection and isolation in automatic processes using analytical redundancy, and presents some new results. It outlines the principles and most important techniques of model-based residual generation using parameter identification and state estimation methods, with emphasis upon the latest attempts to achieve robustness with respect to modelling errors. A solution to the fundamental problem of robust fault detection, providing the maximum achievable robustness by decoupling the effects of faults from each other and from the effects of modelling errors, is given. This approach not only completes the theory but is also of great importance for practical applications. For the case where the prerequisites for complete decoupling are not given, two approximate solutions, one in the time domain and one in the frequency domain, are presented, and the cross-connections to earlier approaches are evidenced. The resulting observer schemes for robust instrument fault detection, component fault detection, and actuator fault detection are briefly discussed. Finally, the basic scheme of fault diagnosis using a combination of analytical and knowledge-based redundancy is outlined.
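A minimal sketch of observer-based residual generation in the analytical-redundancy spirit: the innovation r = y - C*x_hat stays near zero until a fault appears. The system matrices, observer gain, and injected actuator fault are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.95, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])      # observer gain; A - L C is stable here

rng = np.random.default_rng(7)
x = np.zeros((2, 1))              # true plant state
x_hat = np.zeros((2, 1))          # observer state
for t in range(200):
    u = np.array([[np.sin(0.05 * t)]])
    fault = np.array([[0.5 if t >= 100 else 0.0]])   # actuator fault at t=100
    y = C @ x + 0.01 * rng.normal()
    r = y - C @ x_hat             # residual: near zero until the fault
    x = A @ x + B @ (u + fault)
    x_hat = A @ x_hat + B @ u + L @ r
    if t in (99, 120):
        print(f"t={t:3d}  residual={r.item():+.3f}")
```

In practice the residual would be compared against a threshold chosen above the noise and model-error level, which is exactly the role of the FDI thresholds discussed in the entries above.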
Article
Recently a great deal of attention has been given to numerical algorithms for subspace state space system identification (N4SID). In this paper, we derive two new N4SID algorithms to identify mixed deterministic-stochastic systems. Both algorithms determine state sequences through the projection of input and output data. These state sequences are shown to be outputs of non-steady state Kalman filter banks. From these it is easy to determine the state space system matrices. The N4SID algorithms are always convergent (non-iterative) and numerically stable since they only make use of QR and Singular Value Decompositions. Both N4SID algorithms are similar, but the second one trades off accuracy for simplicity. These new algorithms are compared with existing subspace algorithms in theory and in practice.
Article
An intelligent process monitoring and fault diagnosis environment has been developed by interfacing multivariate statistical process monitoring (MSPM) techniques and knowledge-based systems (KBS) for monitoring multivariable process operation. The real-time KBS developed in G2 is used with multivariate SPM methods based on canonical variate state space (CVSS) process models. Fault detection is based on T2 charts of state variables. Contribution plots in G2 are used for determining the process variables that have contributed to the out-of-control signal indicated by large T2 values, and the G2 Diagnostic Assistant (GDA) is used to diagnose the source causes of abnormal process behavior. The MSPM modules developed in Matlab are linked with G2. This intelligent monitoring and diagnosis system can be used to monitor multivariable processes with autocorrelated, cross-correlated, and collinear data. The structure of the integrated system is described and its performance is illustrated by simulation studies.
Article
This paper describes an expert system interface, named ihs, for the interactive data analysis and system identification program Idpac. The interface works as an intelligent help system. The system is completely noninvasive and uses the previous command history to understand what the user is doing, giving help accordingly. This way of monitoring the user's activities is called the command spy strategy. Scripts are used for representing procedural knowledge, and production rules for diagnostic knowledge. The system has been implemented, and a knowledge database handling system identification with the maximum-likelihood method has been developed. An example run with the system is included.
Conference Paper
A knowledge-based system for the automatic identification of dynamic systems (EIS) is presented. It uses some initial information from the user and runs through several phases of an identification procedure, for which heuristic and analytic knowledge is implemented. If the verification of the resulting model is not successful, additional examinations have to be carried out to complete EIS. The behaviour of the system has been tested successfully in practice.
Article
This paper deals with the problem of rule-based identification of unknown systems. Several rules have been established which give a desirable estimate of the time delay, minimum structure order, and unknown parameters by using only one open-loop step-response test. These rules adjust the poles and zeros of the proposed linear model to make its response as close as possible to that of the original open-loop system. It has also been shown that this method can be used directly to identify the unknown parameters of MIMO systems. Finally, this rule-based identifier is applied to three real experimental systems: a jet engine speed control system (SISO), a temperature process control system (SISO), and a coupled electric drive control system (MIMO). For all the systems, desirable identification results have been obtained.
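A toy rule-based identifier in this spirit, assuming a first-order-plus-dead-time structure and a single step test; the noise band and the 63.2% rule are illustrative choices, not the paper's rules:

```python
import numpy as np

def fopdt_from_step(t, y, u_step=1.0, noise_band=0.02):
    """Rules: dead time from first departure, tau from the 63.2% point."""
    y0, y_inf = y[0], y[-50:].mean()          # initial and settled values
    gain = (y_inf - y0) / u_step
    span = abs(y_inf - y0)
    delay = t[np.argmax(np.abs(y - y0) > noise_band * span)]
    t63 = t[np.argmax(np.abs(y - y0) >= 0.632 * span)]
    return gain, t63 - delay, delay           # (gain, tau, dead time)

# Simulated first-order process with dead time: K=2, tau=5, delay=3.
t = np.linspace(0, 50, 2001)
y = np.where(t < 3.0, 0.0, 2.0 * (1 - np.exp(-(t - 3.0) / 5.0)))
print([round(v, 2) for v in fopdt_from_step(t, y)])   # ~[2.0, 5.0, 3.0]
```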
Article
The development and evaluation of an expert advisor for system identification (EASI) is described. The principles behind a suitable knowledge representation paradigm for a system which includes both intelligent tutoring and software package front-end concepts are outlined. The system has been constructed using an expert system 'text-animation' shell with linkage into Prolog-2 to provide enhancements. These extensions have included an online dictionary and bibliographic material. The identification domain aspects covered include experimental design, model structure determination, estimation algorithms, with applications to SISO and MIMO linear systems, and SISO nonlinear dynamics. Different levels of user-model are provided, and the advisor acts as a front-end to a number of in-house identification packages.
Article
Statistical process control (SPC) is a tool for achieving and maintaining product quality. Classical univariate statistical techniques have focused on the monitoring of one quality variable at a time and are not appropriate for analysing process data where variables exhibit collinear behaviour. Minimal information is derived on the interactions between variables, which are so important in complex manufacturing processes. These limitations are addressed through the application of multivariate statistical process control (MSPC). The bases of MSPC are the projection techniques of principal components analysis and projection to latent structures. The philosophy behind these approaches is to reduce the dimensionality of the problem by forming a new set of latent variables, yielding an enhanced understanding of the process behaviour. If the variables are highly correlated, then the process can be defined in terms of a reduced set of latent variables which are a linear combination of the original variables. The authors present an overview of multivariate statistical process control and its nonlinear extension for process monitoring. The power of the methodology is demonstrated by application to two industrial processes.
Article
A quality expert system (QES) prototype for use in the hot strip mill aspect of the steel-making process is described. The QES monitors the process and recommends actions to the user, but it does not directly control the process. The user must adjust operating parameters or change process conditions to improve quality. The QES's architecture and its knowledge acquisition and knowledge representation are discussed. The way in which the prototype QES would handle a slab with a scale defect is described.
Article
For a physical system whose operating state is monitored by various sensors, one of the crucial steps in the fault monitoring and diagnosis process is to validate the sensor values. A sensor value can be validated by observing redundant measurement values. When numerous sensors are installed at different locations in a system, and certain relationships exist among the measured parameters, the redundancies of the sensors can be viewed as embedded throughout the system. In this paper, a technique is proposed that can systematically explore such embedded redundancies of the sensors in a system and utilize them in quickly validating sensor values. The technique is based on causal relations and their interrelations within sensor redundancy graphs (SRGs) as defined in this paper. Any sensor in an SRG can potentially benefit, in validation, from any other sensor involved in the same SRG. A validity level is defined and used to express the strength of the validity of a sensor value as supported by varying degrees of evidence. The validation results also yield valuable clues for knowledge-based fault diagnosis systems regarding the occurrence of system faults and their locations.
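A much-simplified sketch of the embedded-redundancy idea: each analytic relation that checks out adds support to the validity level of every sensor it involves. The relations, readings, and tolerance are invented for illustration:

```python
# Hypothetical readings; a flow balance and a pass-through temperature
# relation provide the embedded redundancy.
readings = {"F_in": 10.0, "F_out": 9.9, "level_rate": 0.1,
            "T_in": 80.0, "T_out": 79.8}

# Each relation lists the sensors it links and returns its residual.
relations = [
    (("F_in", "F_out", "level_rate"),
     lambda r: r["F_in"] - r["F_out"] - r["level_rate"]),
    (("T_in", "T_out"),
     lambda r: r["T_in"] - r["T_out"]),
]

validity = {s: 0 for s in readings}       # crude stand-in for validity levels
for sensors, residual in relations:
    if abs(residual(readings)) < 0.5:     # relation consistent within tolerance
        for s in sensors:
            validity[s] += 1              # each satisfied check adds support
print(validity)
```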
Article
In this paper it is shown that a natural representation of a state space is given by the predictor space, the linear space spanned by the predictors when the system is driven by a Gaussian white noise input with unit covariance matrix. A minimal realization corresponds to a selection of a basis of this predictor space. Based on this interpretation, a unifying view of hitherto proposed algorithmically defined minimal realizations is developed. A natural minimal partial realization is also obtained with the aid of this interpretation.