Article

Real-time batch process supervision by integrated knowledge-based systems and multivariate statistical methods


Abstract

Real-time supervision of batch operations during the progress of a batch run offers many advantages over end-of-batch quality control. Process monitoring, quality estimation, and fault diagnosis activities are automated and supervised by embedding them into a real-time knowledge-based system (RTKBS). Interpretation of multivariate charts is also automated through a generic rule-base for efficient alarm handling and fault diagnosis. Multivariate statistical techniques such as multiway partial least squares (MPLS) provide a powerful modeling, monitoring, and supervision framework. Online process monitoring techniques are developed and extended to include predictions of end-of-batch quality measurements during the progress of a batch run. The integrated RTKBS and the implementation of MPLS-based process monitoring and quality control are illustrated using a fed-batch penicillin production benchmark process simulator.
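Where the abstract describes MPLS modeling of batch trajectories and online prediction of end-of-batch quality, a minimal sketch may help fix ideas. The sketch below batchwise-unfolds a three-way batch data array, fits a PLS model to predict a final quality value, and computes a squared prediction error (SPE) statistic for a new batch. All data, dimensions, and the three-component choice are illustrative assumptions, and the online handling of the unmeasured future portion of a running batch (a key part of the actual method) is omitted.

```python
# A minimal sketch of MPLS-based batch monitoring, assuming synchronized
# batch data of shape (I batches, J variables, K time points). The array
# contents and dimensions are illustrative, not from the paper.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
I, J, K = 40, 5, 100                       # batches, variables, time points
X3 = rng.normal(size=(I, J, K))            # stand-in for historical trajectories
y = X3[:, 0, :].mean(axis=1, keepdims=True) + 0.1 * rng.normal(size=(I, 1))

# Batchwise unfolding: each batch becomes one row of J*K measurements.
X = X3.reshape(I, J * K)
Xm, Xs = X.mean(axis=0), X.std(axis=0) + 1e-12
Xn = (X - Xm) / Xs                         # columns now have zero mean

# MPLS model relating trajectories to end-of-batch quality
# (scale=False because the data are already standardized above).
pls = PLSRegression(n_components=3, scale=False).fit(Xn, y)

# Monitoring a new batch: predict quality and compute the SPE statistic.
x_new = (rng.normal(size=(1, J * K)) - Xm) / Xs
y_hat = pls.predict(x_new)
t_new = pls.transform(x_new)
x_rec = t_new @ pls.x_loadings_.T          # reconstruction in scaled units
spe = float(((x_new - x_rec) ** 2).sum())  # compare against a control limit
print(y_hat, spe)
```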


... In the design of a downhole measurement system, improving the reliability of the system must be considered because downhole conditions are very harsh and complex [29][30][31]. In addition, the power consumption of the system, the signal processing and transmission, and the PC display should also be considered comprehensively. ...
Article
Full-text available
Measurement while drilling (MWD) technology is important for obtaining downhole parameters. As an essential parameter in drilling engineering, lateral force provides a powerful reference basis for judging the downhole drilling direction. Although stress measurement technology has matured, research on downhole lateral forces, especially near-bit lateral forces during the drilling process in petroleum exploration, is lacking. Based on the force analysis of a short measurement circuit, the lateral force measurement value of a drill bit and centralizer was converted to a drill string’s radial-bending-force measurement. Two perpendicular lateral force components were measured using strain gauge technology, a lateral force measurement theory model was established, and a set of MWD systems was designed according to the model. The systems’ function was verified through laboratory and field tests, and the field-test data were successfully obtained in the field application. The test results showed that the tested MWD system had acceptable accuracy, stability, and reliability and had the application potential to measure lateral force in the drilling industry. This article provides a new idea to study lateral force while drilling, which is of great significance to oil-drilling exploration and development.
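As a small aside on the measurement model described above: two perpendicular lateral-force components combine into a resultant magnitude and direction by simple vector arithmetic. The function below is an illustrative sketch; the names and units are assumptions, not from the paper.

```python
# Combine two perpendicular lateral-force components, as measured by
# orthogonal strain-gauge bridges, into a resultant magnitude and direction.
import math

def lateral_force(fx: float, fy: float) -> tuple[float, float]:
    """Return (magnitude in kN, direction in degrees from the x gauge axis)."""
    magnitude = math.hypot(fx, fy)
    direction = math.degrees(math.atan2(fy, fx))
    return magnitude, direction

print(lateral_force(3.0, 4.0))  # (5.0, 53.13...)
```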
... Later, the object- and rule-based hybrid KBS structures were developed. This enabled the use of class-object structures with inheritance to reduce the number of rules drastically and create an efficient reasoning system (Cinar et al. 2007; Tatara and Çinar 2002; Ündey et al. 2003). As the complexity of the system and problem increased, many rules were generated, necessitating a systematic search of rules (depth-first or breadth-first) and prioritizing of the importance of each rule to enable conflict resolution. ...
Chapter
Alarm systems warn people with T1D when hypoglycemia occurs or can be predicted to occur in the near future if the current glucose concentration trends continue. Various alarm system development strategies are outlined in this chapter. Severe hypoglycemia has significant effects ranging from dizziness to diabetic coma and death, while long periods of hyperglycemia cause damage to the vascular system. Fear of hypoglycemia is a major concern for many people with T1D. High doses of exogenous insulin relative to food, activity and low blood glucose levels can precipitate hypoglycemia. Early alarm systems would be very beneficial for people with T1D, warning them or their caregivers about potential hypoglycemia and hyperglycemia episodes before they happen and empowering them to take measures to prevent these events.
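To make the idea of trend-based prediction concrete, here is a minimal sketch (not the chapter's algorithm) that extrapolates a linear trend from recent CGM readings and raises an alarm if the projected glucose crosses a hypoglycemia threshold within a prediction horizon. The sampling interval, horizon, and 70 mg/dL threshold are illustrative assumptions.

```python
# Trend-based predictive hypoglycemia alarm: fit a line to recent CGM
# readings and alarm if the extrapolated glucose falls below a threshold.
import numpy as np

def predictive_alarm(glucose, sample_min=5, horizon_min=30, threshold=70.0):
    """glucose: recent CGM readings in mg/dL, oldest first."""
    t = np.arange(len(glucose)) * sample_min
    slope, intercept = np.polyfit(t, glucose, 1)   # linear trend fit
    predicted = slope * (t[-1] + horizon_min) + intercept
    return predicted < threshold, predicted

alarm, pred = predictive_alarm([120, 112, 103, 96, 88, 81])
print(alarm, round(pred, 1))   # True if hypoglycemia predicted within 30 min
```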
Chapter
The complexity of glucose homeostasis presents a challenge for tight control of blood glucose concentrations (BGC) in response to major disturbances. The nonlinearities and time-varying changes of the BGC dynamics, the occurrence of nonstationary disturbances, time-varying delays on measurements and insulin infusion, and noisy data from sensors provide a challenging system for the AP. In this chapter, a multimodule, multivariate, adaptive AP system is described to deal with several of these challenges simultaneously. Adaptive control systems can tolerate unpredictable changes in a system and external disturbances by quickly adjusting the controller parameters, without any need for knowledge of the initial parameters or conditions of the system. Physiological variables provide additional information that enables feedforward action for measurable disturbances such as exercise. Integration of the control algorithms with a hypoglycemia alarm module reduces the probability of hypoglycemic events.
Chapter
Full-text available
An AP system is challenged by several factors such as meals, exercise, sleep and stress that may have significant effects on glucose dynamics in the body. In this chapter, the relationships between these factors and glucose dynamics are discussed. Most AP systems are based only on glucose measurements. These systems usually require manual inputs or adjustments by the users about the occurrence of some of these factors, such as meals and exercise. Alternatively, multivariable AP systems have been proposed that use biometric variables in addition to glucose measurements to indicate the presence of these factors without a need for manual user input. The effects of different types of insulin, as well as the use of glucagon in AP systems, are also discussed. The chapter includes a discussion of time delays in glucose sensors that affect the performance of predictive hypoglycemia alarm systems and APs.
Chapter
The performance of an AP system depends on successful operation of its components. Faults in sensors, other hardware and software affect the performance and may force the system into manual operation. Many AP systems use model predictive controllers that rely on models to predict BGC and to calculate the optimal insulin infusion rate. Their performance depends on the accuracy of the models and data used for predictions. Sensor errors and missing signals will cause calculation of erroneous insulin infusion rates. Techniques for fault detection and diagnosis and for reconciliation of erroneous data with reliable estimates are presented. Since the models used in the controller may become less accurate with changes in the operating conditions, controller performance assessment is also conducted to evaluate the performance and determine whether it can be improved by adjusting the model, parameters or constraints of the controller.
Book
Full-text available
Significant progress has been made in finding a cure for diabetes. Research in islet transplantation, islet growth from adult stem cells, and gene-based therapies shows good promise and will provide alternatives to cure diabetes. Advances in the treatment of diabetes have offered new technologies that ease the daily burden of people with diabetes, improve their quality of life, and extend their life span. They provide valuable technologies to reduce the impact of diabetes while waiting for a cure. The complexity of glucose homeostasis and the current level of technology challenge tight blood glucose concentration (BGC) regulation. Artificial pancreas (AP) systems that closely mimic the glucose regulating function of a healthy pancreas automate BGC management, dramatically reducing diabetes-related risks and improving the lives of people who have the disease. These systems will monitor glucose levels around the clock and automatically infuse the optimal amount of insulin, and potentially other BGC stabilizing hormones, in a timely manner. The nonlinearities and time-varying changes of blood glucose dynamics, the occurrence of non-stationary disturbances, time-varying delays on measurements and insulin infusion, and noisy data from sensors provide challenges for the AP. Several different types of AP system designs have been proposed in recent years. Most systems rely exclusively on continuous glucose measurements and adjust the insulin infusion rate of a pump. Advances in wearable devices that report physiological data in real time enabled the use of additional information and the development of multivariable AP systems. Progress in long-term stable glucagon research enabled the development of dual-hormone AP system designs. Advances in smartphones and communications technologies, and in control theory, contributed to the development of powerful control algorithms that can be executed on smartphones and computational capabilities installed in insulin pump systems. Techniques in system monitoring and supervision, fault detection and diagnosis, and performance assessment enabled advanced diagnostics and fault-tolerant control technologies for AP systems. The goal of this book is to introduce recent developments and directions for future progress in AP systems. The material covered represents a culmination of several years of theoretical and applied research carried out by the authors and many prominent research groups around the world. The book starts with some historical background on diabetes and AP systems. The heart of the AP system - sophisticated algorithms that function on a smartphone or similar device - collects information from the sensor of a continuous glucose monitor and wearable devices, computes the optimal insulin dose to infuse and instructs the insulin pump to deliver it. The early chapters of the book provide information about currently available devices, techniques and algorithms to develop AP systems. Then, several factors such as meals, exercise, stress and sleep (MESS) that challenge AP systems are discussed. In later chapters, both empirical (data-driven) and first-principles-based modeling techniques are presented. Recursive modeling techniques that enable adaptive control of the AP are introduced and integrated with multiple-input models used in adaptive control. Different control strategies such as model predictive, proportional-integral-derivative, generalized predictive, and fuzzy-logic control are introduced.
Physiological variables that can provide additional information to enable feedforward action to deal with MESS challenges are proposed. Several additional modules to address the challenges of MESS factors are discussed, and a multi-module adaptive multivariable AP system is described. Fault detection and reconciliation of missing or erroneous data and assessment of controller performance are presented to develop modules for fault-tolerant operation of an AP. A summary of recent clinical studies is provided and the directions of future developments are discussed. Over 300 references are listed to provide a database of publications in many AP-related areas.
... Models serve as knowledge representation of a large amount of structural, functional and behavioral information and their relationship [3,4,5,11,22,33,32,36,37]. This knowledge representation is used to create complex cause-effect reasoning leading to construction of powerful and robust automatic diagnosis and isolation systems [40,1,9,12,13,14,15,42]. Qualitative reasoning by using bond graphs can be conducted to construct intelligent supervisory control systems [23,43]. ...
Chapter
Most of the early approaches for fault diagnosis and isolation were rule based. Such approaches use simple prediction rules to identify possible faults in a system and their causes. These methods suffer from incompleteness and inflexibility. Recent fault diagnosis methods are based on analysis of the underlying model structures and behavior of a system. Models serve as knowledge representation of a large amount of structural, functional and behavioral information and their relationship [8, 16, 50, 93, 139, 197, 198, 237, 239]. This knowledge representation is used to create complex cause-effect reasoning leading to construction of powerful and robust automatic diagnosis and isolation systems [2, 76, 98, 99, 114, 115, 261, 267].
... For process history based techniques, a large amount of historical process data is needed to create a database of fault patterns and then to compute statistical limits that indicate the significance of deviations in sensor readings 22 . Qualitative model-based techniques have also been integrated with data-driven techniques to leverage the power of multivariate statistical approaches and knowledge-based systems 27 . A different paradigm is to develop robust control systems that can tolerate sensor errors. ...
Article
Full-text available
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors of people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
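As a rough illustration of the ORKF idea (the paper's exact formulation is not reproduced here), a scalar Kalman filter can be made outlier-robust by inflating the measurement noise for samples whose normalized innovation exceeds a gate. All parameters below are assumptions.

```python
# A minimal scalar sketch of one way to make a Kalman filter outlier-robust:
# innovation gating with inflated measurement noise for suspect samples.
import numpy as np

def orkf(z, q=1.0, r=4.0, gate=3.0):
    """Random-walk state model; down-weight measurements whose normalized
    innovation exceeds `gate` standard deviations."""
    x, p = z[0], r
    out = []
    for zk in z[1:]:
        p += q                                   # time update (predict)
        s = np.sqrt(p + r)                       # innovation std dev
        nu = zk - x                              # innovation
        ratio = abs(nu) / s
        r_eff = r if ratio <= gate else r * ratio**2   # inflate noise for outliers
        k = p / (p + r_eff)                      # gain shrinks for outliers
        x += k * nu
        p *= (1 - k)
        out.append(x)
    return np.array(out)

z = np.array([100, 101, 250, 103, 104], dtype=float)  # 250 is an outlier
print(orkf(z).round(1))  # the spike is nearly ignored
```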
... Analysis of contribution plots can be automated and linked with fault diagnosis by using real-time knowledge-based systems (KBS). The integration of statistical detection tools and contribution plots with fault diagnosis by using a supervisory KBS has been illustrated for both continuous (Norvilas et al. 2000) and batch processes (Undey et al. 2003a, 2004). ...
Book
In an age of heightened nutritional awareness, assuring healthy human nutrition and improving the economic success of food producers are top priorities for agricultural economies. In the context of these global changes, new innovative technologies are necessary for appropriate agro-food management from harvest and storage to marketing and consumer consumption. Optical Monitoring of Fresh and Processed Agricultural Crops takes a task-oriented approach, providing essential applications for a better understanding of non-invasive sensory tools used for raw, processed, and stored agricultural crops. This authoritative volume presents interdisciplinary optical technologies feasible for in-situ analyses, such as vision systems, VIS/NIR spectroscopy, hyperspectral camera systems, scattering, time- and spatially-resolved approaches, fluorescence, and sensor fusion. Written by an internationally recognized team of experts and using a framework of new approaches, this text illustrates how cutting-edge sensor tools can perform rapid and non-destructive analysis of biochemical, physical, and physiological properties, such as maturity stage, nutritional value, and neoformed compounds appearing during processing. These are critical components to maximizing nutritional quality and safety of fruits and vegetables and decreasing economic losses due to produce decay. Quality control systems are quickly gaining a foothold in food manufacturing facilities, making Optical Monitoring of Fresh and Processed Agricultural Crops a valuable resource for agricultural technicians and developers working to maintain nutritional product value and approaching a fine-tuned control process in the crop supply chain.
... Marsh and Tucker (1991) recognised that the process variable measurements taken during a batch run, although transient in nature, do follow a certain dynamic pattern, and they proposed a simple SPC technique for monitoring a single measurement variable. Afterward, Nomikos and MacGregor (1995), Lennox et al. (2000), Undey et al. (2003) and many others proposed MSPC methods for the analysis and online monitoring of batch processes. They assumed that the only information needed to develop these methods is a historical database of measured process variable trajectories from past successful batches. ...
Article
Full-text available
The aim of this research is to expand a framework that integrates two important concepts: statistical process control (SPC) and engineering process control (EPC). Most of the literature on integrated SPC/EPC systems focuses on continuous processes, mainly with algorithmic SPC; integrated SPC/EPC systems for batch process control have not received the same degree of attention. This paper is the first of its kind to apply integrated SPC/EPC to a batch process. The proposed SPC/EPC integration is performed continually, with active SPC while the batch is in progress and run-to-run (RTR) control action between batches. As a validation step, the proposed approach is applied to an industrial batch alkyd polymerisation reactor. Through this case study, process engineers at the company now have a valuable decision-making tool for when the production process is affected by certain disruptions, with obvious consequences on product quality, productivity and competitiveness.
... Although contribution plots in multivariate statistical process monitoring (MSPM) techniques can potentially identify the contributing variables for a novel fault (Qin, 2003), they do not explain the root cause for the identified contributing variables (Venkatasubramanian et al., 2003c). Hybrid diagnosis approaches (Leung and Romagnoli, 2002; Musulin et al., 2006; Ündey et al., 2003; Zumoffen and Basualdo, 2008), which integrate the techniques mentioned above, suffer from similar problems when faced with novel faults. Among existing fault diagnosis methods, causal digraph-based reasoning (CDR) uses only cause–effect knowledge about normal situations and is suitable for diagnosing novel faults (Venkatasubramanian et al., 2003b). ...
Article
This paper investigates the challenging problem of diagnosing novel faults whose fault mechanisms and relevant historical data are not available. Most existing fault diagnosis systems are incapable of explaining root causes for unanticipated, novel faults, because they rely on either models or historical data of known faulty conditions. To address this issue, we propose a new framework for novel fault diagnosis, which integrates causal reasoning on signed digraph models with multivariate statistical process monitoring. The prerequisites for our approach include historical data of normal process behavior and qualitative cause–effect relationships that can be derived from process flow diagrams. In this new approach, a set of candidate root nodes is identified first via qualitative reasoning on the signed digraph; then quantitative local consistency tests are implemented for each candidate based on multivariate statistical process monitoring techniques; finally, using the resulting multiple local residuals, diagnosis is performed based on the exoneration principle. The cause–effect relationships in the digraph enable automatic variable selection and the local residual interpretations for statistical monitoring. The effectiveness of this new approach is demonstrated using numerical examples based on the Tennessee Eastman process data.
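A toy sketch of the qualitative step may clarify the approach: walk the signed digraph upstream from the deviated variables to collect candidate root nodes. The graph, variable names, and deviations below are illustrative, and both the sign-consistency checks and the paper's statistical local consistency tests are omitted.

```python
# Toy signed digraph (SDG): walk upstream from deviated variables to
# collect candidate root nodes. Edge signs are stored but not checked here.
SDG = {            # edge: cause -> {effect: sign}
    "feed":    {"level": +1},
    "level":   {"outflow": +1},
    "coolant": {"temp": -1},
    "temp":    {"pressure": +1},
}

def upstream_candidates(deviated):
    parents = {}
    for cause, effects in SDG.items():
        for effect in effects:
            parents.setdefault(effect, set()).add(cause)
    candidates, frontier = set(), set(deviated)
    while frontier:
        node = frontier.pop()
        for cause in parents.get(node, ()):   # follow arcs against their direction
            if cause not in candidates:
                candidates.add(cause)
                frontier.add(cause)
    return candidates

print(upstream_candidates({"pressure"}))   # {'temp', 'coolant'}
```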
... An automated technique to monitor the sensor data and diagnose faults can significantly improve the management of abnormal situations. Many methods have been developed over the past few decades, such as statistical process control (SPC), multivariate statistical process control (MSPC), principal component analysis/partial least squares (PCA/PLS) [2][3][4], neural networks, and expert systems [5,6]. A well-known method for process monitoring is principal component or independent component analysis (PCA/ICA) [7,8]. ...
Article
Alarm overload in modern chemical plants presents many difficulties in decision making and diagnosis. Management and optimization of alarm information are challenging tasks that must be confronted every day. A new alarm optimization technique, based on a fuzzy clustering–ranking (FCR) algorithm, is proposed according to the correlation among process-measured variables. The fuzzy clustering method is used to rationally group and cluster the information matrix of alarm variables to effectively decrease alarms while maintaining safe production. Moreover, the fuzzy difference driving (FDD) algorithm is used to rank the clustering centers and alarm variables in every cluster, based on objective process characteristics. Furthermore, the validity of the proposed algorithm and solution is verified by application to a practical ethylene cracking furnace alarm system. The proposed method is an effective and reliable alarm-management method that can optimize process operation and improve plant safety in the chemical industry. © 2005 American Institute of Chemical Engineers Process Saf Prog, 2005
... Reference [37] recognized that the process variable measurements taken during a batch run, although transient in nature, do follow a certain dynamic pattern, and proposed a simple SPC technique for monitoring a single measurement variable. Afterward, references [9] [38] [39], and many others, proposed multivariate SPC (MSPC) methods for the analysis and on-line monitoring of batch processes. They assumed that the only information needed to develop these methods is a historical database of measured process variable trajectories from past successful batches. ...
Article
Full-text available
The objective of this paper is to develop a framework that integrates two important concepts: statistical process control (SPC) and engineering process control (EPC). Most of the literature on integrated SPC/EPC systems focuses on continuous processes, mainly with algorithmic SPC; integrated SPC/EPC systems in batch process control have not received the same degree of attention. In particular, the only run-to-run (RTR) control methodology applications are mostly focused on the semiconductor industry. This paper is the first of its kind to apply integrated SPC/EPC to a batch process based on data-driven quality improvement tools. The proposed SPC/EPC integration is performed continually in two successive phases: (1) active SPC while the batch is in progress, and (2) RTR control action between batches. Control limits for critical variables are developed using information from the historical reference distribution of past successful batches. The EPC application is based on the development of progressive knowledge-based rules. For validation purposes, the proposed approach is applied to data collected from an industrial batch alkyd polymerization reactor whose evolution is monitored by measuring the overflow water weight, the acidity index and the viscosity of samples withdrawn from the reactor. This industrial process is poorly automated, subject to several disturbances, and the batches have uneven lengths. The synthesis is stopped at the maximum yield allowed by the gelation point of the cold product. Through this case study, process engineers at the company now have a valuable decision-making tool for when the production process is affected by certain disruptions, with obvious consequences on product quality, productivity and competitiveness.
... Deviations in process variables during the progress of a batch can provide information about product properties and an estimation of the quality of the final product well before the completion of the batch. Process monitoring and fault diagnosis have been very effective in achieving this goal of process supervision [16]. More specifically, multi-way principal component analysis (MPCA) has been successfully applied to batch processes to monitor the process, identify when it shifts to a new operating condition and detect and diagnose abnormalities [12]. ...
Article
Full-text available
This paper investigates fault diagnosis in batch processes and presents a comparative study of feature extraction and classification techniques applied to a specific biotechnological case study: the fermentation process model by Birol et al. (Comput Chem Eng 26:1553-1565, 2002), which is a benchmark for advanced batch process monitoring, diagnosis and control. Fault diagnosis is achieved using four approaches on four process scenarios with different levels of noise, so as to evaluate the effect of noise on performance. Each approach combines a feature extraction method, either multi-way principal component analysis (MPCA) or multi-way independent component analysis (MICA), with a classification method, either artificial neural networks (ANN) or support vector machines (SVM). The performance obtained by the different approaches is assessed and discussed for a set of simulated faults under different scenarios. One of the faults (a loss in mixing power) could not be detected due to the minimal effect of mixing on the simulated data. The remaining faults could be easily diagnosed, and the subsequent discussion provides practical insight into the selection and use of the available techniques in specific applications. Irrespective of the classification algorithm, MPCA renders better results than MICA; hence the diagnosis performance proves to be more sensitive to the selection of the feature extraction technique.
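One of the compared pipelines (MPCA feature extraction followed by SVM classification) can be sketched compactly with scikit-learn. The synthetic data and injected fault signatures below are stand-ins for the Birol benchmark, and the component count and kernel are assumptions.

```python
# MPCA features + SVM classification on synthetic batch data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
I, J, K = 60, 4, 50                            # batches, variables, time points
X3 = rng.normal(size=(I, J, K))
labels = rng.integers(0, 3, size=I)            # 3 simulated fault classes
X3[labels == 1, 0, :] += 1.0                   # inject class-dependent shifts
X3[labels == 2, 2, 25:] -= 1.5

X = X3.reshape(I, J * K)                       # batchwise unfolding (MPCA)
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.score(X, labels))                    # training accuracy of the sketch
```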
Article
Accurate and effective equipment state assessment can help keep abreast of equipment operating reliability when performing emergency management of equipment faults. However, the state change of a running system is a dynamic process, and the occurrence of faults is a random process. Therefore, the dynamic characteristics of a system should be identified and real-time data should be used to establish a state evaluation model. This paper proposes a dynamic reliability assessment method for equipment based on the process capability index (PCI) and a fault importance index (FII) to identify the dynamic performance of equipment, evaluate the stability of process data, and analyze the structural importance of the functional components in the equipment. We present a fault importance calculation method and a comprehensive PCI (CPCI) method based on structural importance. These methods are applied to a synchrotron cooling water system in the Shanghai Proton and Heavy Ion Center. The results show that the dynamic reliability assessment model based on PCI and FII can effectively identify changes in equipment dynamic reliability.
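For reference, the process capability indices that the method builds on follow the textbook definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − μ, μ − LSL)/(3σ). The sketch below computes them on simulated data; the specification limits and data are illustrative.

```python
# Standard process capability indices on sample data.
import numpy as np

def cp_cpk(x, lsl, usl):
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                 # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # capability with centering
    return cp, cpk

x = np.random.default_rng(2).normal(20.0, 0.5, size=200)  # e.g. coolant temp, °C
print(cp_cpk(x, lsl=18.0, usl=22.0))
```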
Article
Engineering Applications of Artificial Intelligence (EAAI) is a journal of very high repute in the domain of engineering and computer science. This paper gives a broad view of the publications in EAAI from 1988 to 2018 that are indexed in Web of Science (WoS) and Scopus. The main purpose of this research is to bring forward the prime impelling factors behind EAAI publications and their citation structure. The publication and citation structure of EAAI is analyzed, including the distribution of publications over the years, citations per year and a bird's-eye view of the citation structure. Then a co-citation analysis and the trend of top keywords over the years are given. The co-authorship networks and a geographic analysis of the sources are also provided. Further, a country-wise temporal and quantitative analysis of the publications is given, along with the highly cited documents among the EAAI publications.
Book
Use of a membrane within a bioreactor (MBR), either microbial or enzymatic, is a technology that has existed for 30 years to increase process productivity and/or facilitate the recovery and purification of biomolecules. Currently, this technology is attracting increasing interest for speeding up processes and improving sustainability. In this work, we present the current status of MBR technologies. Fundamental aspects and process design are outlined, and emerging applications are identified in both aspects of engineering, i.e., enzymatic and microorganism-based (bacteria, animal cells, and microalgae), including microscale aspects and wastewater treatment. This integrated technology is compared with classical batch or continuous bioreactors to highlight the performance of MBRs and to identify the factors limiting their performance and the different possibilities for their optimization.
Article
Full-text available
Artificial pancreas (AP) control systems rely on signals from glucose sensors to collect glucose concentration (GC) information from people with Type 1 diabetes and compute insulin infusion rates to maintain GC within a desired range. Sensor performance is often limited by sensor errors, communication interruptions and noise. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and replace erroneous or missing values detected with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model. This leverages the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. A novel method called nominal angle analysis is proposed to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in the metabolism. The performance of the system is illustrated with clinical data from continuous glucose monitoring sensors collected from people with Type 1 diabetes.
Article
Two adaptive principal component analysis methods are improved based on adaptively extracting principal components (PCs) for process monitoring: recursive PCA (RPCA) and moving window PCA (MWPCA). An adaptive PC-extraction algorithm is proposed that uses a threshold method based on the score rule in sports games to determine the number of PCs in real time. It can effectively overcome the shortcomings of the conventional cumulative percent variance method in determining the number of PCs. Moreover, two improved RPCA and MWPCA methods using the new threshold method are proposed to monitor an industrial process online. Similarly to the forgetting factor in RPCA, an optimal variable moving window size is selected, adding forgetting factors into the data samples and covariance matrices, respectively. The results show the validity of the improvements compared with the original RPCA and MWPCA in Tennessee Eastman process monitoring.
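A minimal sketch of the recursive ingredient may help: the correlation matrix is discounted with a forgetting factor as new samples arrive, and the number of PCs is re-selected by an eigenvalue threshold (used here as a stand-in for the paper's score-rule threshold; the forgetting factor and threshold values are assumptions).

```python
# RPCA-style recursive correlation update with a forgetting factor, plus
# a simple eigenvalue threshold to re-select the number of PCs online.
import numpy as np

def rpca_update(R, x_new, lam=0.99):
    """R: current correlation matrix; x_new: scaled new sample (1-D array)."""
    return lam * R + (1 - lam) * np.outer(x_new, x_new)

def n_pcs(R, threshold=1.0):
    eigvals = np.linalg.eigvalsh(R)[::-1]      # descending eigenvalues
    return int(np.sum(eigvals > threshold))    # keep PCs above the threshold

rng = np.random.default_rng(3)
R = np.eye(4)
for _ in range(500):
    R = rpca_update(R, rng.normal(size=4))
print(n_pcs(R))
```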
Article
A novel networked process monitoring, fault propagation identification, and root cause diagnosis approach is developed in this study. First, process network structure is determined from prior process knowledge and analysis. The network model parameters including the conditional probability density functions of different nodes are then estimated from process operating data to characterize the causal relationships among the monitored variables. Subsequently, the Bayesian inference‐based abnormality likelihood index is proposed to detect abnormal events in chemical processes. After the process fault is detected, the novel dynamic Bayesian probability and contribution indices are further developed from the transitional probabilities of monitored variables to identify the major faulty effect variables with significant upsets. With the dynamic Bayesian contribution index, the statistical inference rules are, thus, designed to search for the fault propagation pathways from the downstream backwards to the upstream process. In this way, the ending nodes in the identified propagation pathways can be captured as the root cause variables of process faults. Meanwhile, the identified fault propagation sequence provides an in‐depth understanding as to the interactive effects of faults throughout the processes. The proposed approach is demonstrated using the illustrative continuous stirred tank reactor system and the Tennessee Eastman chemical process with the fault propagation identification results compared against those of the transfer entropy‐based monitoring method. The results show that the novel networked process monitoring and diagnosis approach can accurately detect abnormal events, identify the fault propagation pathways, and diagnose the root cause variables. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2348–2365, 2013
Article
There is a widespread assumption that batch synchronization is only required if the batch trajectories have different durations. This paper is devoted to demonstrating that synchronization is a critical and necessary preliminary step in bilinear batch process modeling, no matter whether batch trajectories have equal length or not. Another practical assumption is that all batches need the same synchronization method to be aligned. Two different synchronization approaches are compared in terms of synchronization quality: the Multisynchro approach, which takes into account the type of asynchronism, and the method based on linearly expanding and/or compressing pieces of variable trajectories (the TLEC method), implemented in commercial software. The consequences of inappropriately synchronizing batch data with multiple asynchronisms in process monitoring are investigated. For this study, the observationwise unfolding-T scores batchwise unfolding (OWU-TBWU) approach, which integrates the TLEC method for batch synchronization, is used for process modeling. Data from realistic simulations of a fermentation process of the Saccharomyces cerevisiae cultivation with five different types of asynchronism are used for illustration.
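For contrast with the proper synchronization methods discussed above, the crudest alignment, linearly resampling every trajectory onto a common number of points, can be sketched in a few lines. Real methods such as TLEC or Multisynchro warp trajectories against a reference rather than resampling blindly; this sketch is only a baseline.

```python
# Crude stand-in for synchronization by linear expansion/compression:
# resample each variable's trajectory onto a common number of points.
import numpy as np

def resample_batch(batch, k_target):
    """batch: array (J variables, K time points) -> (J, k_target)."""
    j, k = batch.shape
    t_old = np.linspace(0.0, 1.0, k)
    t_new = np.linspace(0.0, 1.0, k_target)
    return np.vstack([np.interp(t_new, t_old, batch[v]) for v in range(j)])

short = np.random.default_rng(4).normal(size=(3, 80))
print(resample_batch(short, 100).shape)   # (3, 100)
```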
Chapter
The concepts of quantitative fault diagnosis and fault tolerant control are introduced in this chapter. After a thorough introduction, the notions of analytical redundancy relations (ARRs), residuals, and structural analysis for fault diagnosis and isolation (FDI) are presented. The causal structure of the bond graph model is exploited to derive the ARRs, which are analyzed in real time for FDI. Fault accommodation is performed through system reconfiguration so that suitably chosen redundant devices are activated in place of faulty components. Such management of operating modes is handled through a well-developed algorithm based on the availability of healthy components and their functional associations to achieve the desired objectives. An example application concerning actuator failure in an electric vehicle is considered. In the next step, diagnosis of uncertain parameter systems is discussed, where uncertainties are included in the ARRs so that false alarms and misdetections can be avoided. Parametric uncertainties are modeled through linear fractional transformation in bond graph form, and bounding adaptive thresholds are derived for residual signals. This approach leads to robust diagnosis of uncertain systems, which is demonstrated through an example mechatronic system application.
Book
Acting as a support resource for practitioners and professionals looking to advance their understanding of complex mechatronic systems, Intelligent Mechatronic Systems explains their design and recent developments from first principles to practical applications. Detailed descriptions of the mathematical models of complex mechatronic systems, developed from fundamental physical relationships, are built on to develop innovative solutions, with particular emphasis on physical model-based control strategies. Following a concurrent engineering approach, supported by industrial case studies, and drawing on the practical experience of the authors, Intelligent Mechatronic Systems covers a range of topics, including: an explanation of a common graphical tool for integrated design and its uses from modeling and simulation to control synthesis; introductions to key concepts such as different means of achieving fault tolerance, robust overwhelming control, and force and impedance control; dedicated chapters on advanced topics such as multibody dynamics and micro-electromechanical systems, vehicle mechatronic systems, robot kinematics and dynamics, space robotics and intelligent transportation systems; and detailed discussion of cooperative environments and reconfigurable systems. Intelligent Mechatronic Systems provides control, electrical and mechanical engineers and researchers in industrial automation with a means to design practical, functional and safe intelligent systems.
Article
A novel framework for process pattern construction and multi-mode monitoring is proposed. To identify process patterns, the framework utilizes a clustering method that consists of an ensemble moving window strategy along with an ensemble clustering solutions strategy. A new k-independent component analysis–principal component analysis (k-ICA–PCA) modeling method captures the relevant process patterns in corresponding clusters and facilitates the validation of ensemble solutions. Following pattern construction, the proposed framework offers an adjoined multi-ICA–PCA model for detection of faults under multiple operating modes. The Tennessee Eastman (TE) benchmark process is used as a case study to demonstrate the salient features of the method. Specifically, the proposed method is shown to have superior performance compared to the previously reported k-PCA models clustering approach.
Article
This paper deals with the process monitoring strategy for a Steel Making Shop (SMS). The process and the feedstock characteristics of the SMS were simultaneously monitored for the detection of an upset condition or an out-of-control situation. Partial Least Squares Regression (PLSR), a multivariate projection-based technique, was used for the development of the process representation. Subsequently, a T² chart was used to monitor the process and the feedstock characteristics, and the out-of-control observations were diagnosed with the aid of contribution plots. Contribution plots revealed the characteristic or the combination of characteristics responsible for an out-of-control observation. A multivariate Hotelling's T² chart was also used for monitoring the process and feedstock characteristics, and the results thus obtained were compared with those of the PLSR-based T² chart. Data pertaining to the process and feedstock characteristics were collected for a period of six months. The PLSR-based T² chart was able to detect the out-of-control observations, and the contribution plots aided in revealing the set of characteristics responsible for them.
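The T² statistic and per-variable contributions can be sketched on latent-variable scores as follows, using PCA scores as a stand-in for the PLSR scores of the case study. The contribution formula shown is one common definition, and the data are simulated rather than the plant data of the paper.

```python
# T² statistic and variable contributions on latent scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
Xn = (X - X.mean(0)) / X.std(0)
pca = PCA(n_components=3).fit(Xn)

def t2(x):
    t = pca.transform(x.reshape(1, -1))[0]
    return float(np.sum(t**2 / pca.explained_variance_))   # Hotelling's T²

def contributions(x):
    t = pca.transform(x.reshape(1, -1))[0]
    # variable j's contribution c_j = sum_a (t_a / lambda_a) * p_ja * x_j
    return np.sum((t / pca.explained_variance_) * pca.components_.T * x[:, None],
                  axis=1)

x_bad = Xn[0].copy()
x_bad[2] += 5.0                      # upset variable 3
print(t2(x_bad), contributions(x_bad).round(2))
```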
Article
This paper illustrates the process monitoring strategy for a multistage manufacturing facility with the aid of cluster analysis and multiple multi-block partial least squares (MBPLS) models. Traditionally, a single MBPLS model is used for monitoring multiple process and quality characteristics. However, modelling all the responses together in a single model may cause poor model fit in the event of (i) uncorrelated response variables, or (ii) groups of response variables having high correlation amongst the variables within a group but no or negligible correlation between the groups. This paper overcomes this problem by combining cluster analysis with MBPLS through the development of multiple MBPLS models. Each of the MBPLS models is used to detect out-of-control observations, and a superset of the out-of-control observations is created. Two new fault diagnostic statistics for stage-wise and variable-wise contribution are developed for the superset. The developed methodology is applied to a steel making shop for monitoring. The case study results show that the proposed methodology performs better than the traditionally employed single MBPLS model.
Article
Pharmaceutical processes are complex and highly variable in nature. The complexity and variability associated with these processes result in inconsistent and sometimes unpredictable process outcomes. To deal with the complexity and understand the causes of variability in these processes, in-depth knowledge and thorough understanding of the process and the various factors affecting the process performance become critical. This makes knowledge management and process monitoring an indispensable part of the process improvement efforts for any pharmaceutical organization.
Article
An adaptive agent-based hierarchical framework for fault type classification and diagnosis in continuous chemical processes is presented. Classification techniques such as Fisher’s discriminant analysis (FDA) and partial least-squares discriminant analysis (PLSDA) and diagnosis tools such as variable contribution plots are used by agents in this supervision system. After an abnormality is detected, the classification results reported by different diagnosis agents are summarized via a performance-based criterion, and a consensus diagnosis decision is formed. In the agent management layer of the proposed system, the performances of diagnosis agents are evaluated under different fault scenarios, and the collective performance of the supervision system is improved via performance-based consensus decision and adaptation. The effectiveness of the proposed adaptive agent-based framework for the classification of faults is illustrated using a simulated continuous stirred tank reactor (CSTR) network.
Article
Wavelet theory and multiscale methods have generated interest for fault monitoring and control in petrochemical processes. Principal component analysis (PCA) has been used successfully as a multivariate statistical process tool for detecting faults by extracting feature information from complex petrochemical data. Traditional linear PCA (LPCA) is of limited use for complicated nonlinear systems; therefore, an adaptive nonlinear PCA (NLPCA) based on an improved input training neural network (IT-NN) is presented. A momentum factor and adaptive learning rates are added to the learning algorithm to improve the training speed of the IT-NN. A novel method of wavelet-based adaptive multiscale nonlinear PCA (MS-NLPCA) is proposed for process signal monitoring. It can effectively monitor the slow and feeble changes of fault signals that cannot be monitored by conventional PCA, and it detects faults early with a minimal rate of false alarms. The validity of the proposed approach has been demonstrated by experimental simulations and practical application.
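The multiscale preprocessing step can be illustrated with PyWavelets: decompose the signal, soft-threshold the detail coefficients, and reconstruct. The wavelet, decomposition level, and universal-threshold rule are common choices assumed here; the paper's IT-NN-based nonlinear PCA is not reproduced.

```python
# Wavelet multiscale denoising: decompose, soft-threshold details, reconstruct.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 512)
signal = np.sin(8 * np.pi * t) + 0.3 * rng.normal(size=t.size)

coeffs = pywt.wavedec(signal, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (MAD)
thresh = sigma * np.sqrt(2 * np.log(signal.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")                  # feed this to PCA
print(denoised.shape)
```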
Article
This study involves real-time monitoring and fault diagnosis in batch baker's yeast fermentation. A specific Real Time Statistical Process Analysis and Control (RT-SPAC) program was developed to monitor instantaneous reaction conditions. The air flow rate fed to the reactor, temperature, pH, and dissolved oxygen concentration in a laboratory-size fermenter were monitored and recorded by means of on-line sensors. Under control of the RT-SPAC program, 22 batch baker's yeast fermentation operations were carried out. In the first 20 operations, an ordinary process was followed under previously defined nominal operating conditions. Historical data collected from these batches were then used for on-line Dynamic Principal Component Analysis (DPCA) in the course of the following two batches. The last two batches were implemented such that some deliberate faults (in temperature and pH) were introduced during the operation. The results indicated that the software was capable of capturing the process faults, and furthermore the possible causes of these faults were identified by contribution plots.
Article
Soft sensors based on multivariate statistical models are used very frequently for the monitoring of batch processes. From the moment of model calibration onward, the model is usually assumed to be time-invariant. Unfortunately, batch process conditions are subject to several events that make the correlation structure between batches change with respect to that of the original model. This can determine a decay of the soft sensor performance, unless periodic maintenance (i.e., updating) of the model is carried out. This article proposes a methodology for the automatic maintenance of PLS soft sensors in batch processing. Whereas the adaptation scheme usually follows chronological order in classical recursive updating, the proposed strategy defines the reference data set for model recalibration as the set of batches (nearest neighbors) that are most similar to the currently running batch. The nearest neighbors to a running batch are identified during the initial evolution of the batch following a concept of proximity in the latent space of principal components. In this way, for any new batch to be run, a model can be tailored on the running batch itself. The effectiveness of the proposed updating methodology is evaluated in two case studies related to the development of adaptive soft sensors for real-time product quality monitoring: a simulated fed-batch process for the production of penicillin and an industrial batch polymerization process for the production of a resin.
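The nearest-neighbor updating idea can be sketched as follows: score the early evolution of the running batch in a latent space built from historical batches, select the most similar past batches, and recalibrate a PLS soft sensor on that local reference set. Dimensions, the two-component models, and the neighbor count are illustrative assumptions.

```python
# Nearest-neighbor model maintenance for a batch soft sensor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
I, J, K, K0 = 50, 4, 100, 20               # K0 = initial window length
X3 = rng.normal(size=(I, J, K))            # historical batch trajectories
y = X3.mean(axis=(1, 2)).reshape(-1, 1)    # stand-in quality values

X0 = X3[:, :, :K0].reshape(I, -1)          # early-evolution data, unfolded
pca = PCA(n_components=2).fit(X0)
T_hist = pca.transform(X0)                 # latent scores of past batches

x_run = rng.normal(size=(1, J * K0))       # running batch, first K0 samples
t_run = pca.transform(x_run)
nn = np.argsort(np.linalg.norm(T_hist - t_run, axis=1))[:15]  # 15 neighbors

# Recalibrate the soft sensor on the local reference set only.
local = PLSRegression(n_components=2).fit(X3[nn].reshape(len(nn), -1), y[nn])
print(local.predict(X3[0:1].reshape(1, -1)))
```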
Article
Complex processes involve many process variables, and operators faced with the tasks of monitoring, control, and diagnosis of these processes often find it difficult to effectively monitor the process data, analyse current states, detect and diagnose process anomalies, or take appropriate actions to control the processes. The complexity can be rendered more manageable provided important underlying trends or events can be identified based on the operational data (Rengaswamy and Venkatasubramanian, 1992. An Integrated Framework for Process Monitoring, Diagnosis, and Control Using Knowledge-based Systems and Neural Networks. IFAC, Delaware, USA, pp. 49–54.). To assist plant operators, decision support systems that incorporate artificial intelligence (AI) and non-AI technologies have been adopted for the tasks of monitoring, control, and diagnosis. The support systems can be implemented based on the data-driven, analytical, and knowledge-based approach (Chiang et al., 2001. Fault Detection and Diagnosis in Industrial Systems. Springer, London, Great Britain). This paper presents a literature survey on intelligent systems for monitoring, control, and diagnosis of process systems. The main objectives of the survey are first, to introduce the data-driven, analytical, and knowledge-based approaches for developing solutions in intelligent support systems, and secondly, to present research efforts of four research groups that have done extensive work in integrating the three solutions approaches in building intelligent systems for monitoring, control and diagnosis. The four main research groups include the Laboratory of Intelligent Systems in Process Engineering (LISPE) at Massachusetts Institute of Technology, the Laboratory for Intelligent Process Systems (LIPS) at Purdue University, the Intelligent Engineering Laboratory (IEL) at the University of Alberta, and the Department of Chemical Engineering at University of Leeds. The paper also gives some comparison of the integrated approaches, and suggests their strengths and weaknesses.
Article
Full-text available
Although batch processes are “simple” in terms of equipment and operation design, it is often difficult to ensure consistently high product quality. The aim of this PhD project is the development of multivariate statistical methodologies for the real-time monitoring of quality in batch processes for the production of high value added products. Two classes of products are considered: those whose quality is determined by chemical/physical characteristics, and those where surface properties define quality. In particular, the challenges related to the instantaneous estimation of the product quality and the real-time prediction of the time required to manufacture a product in batch processes are addressed using multivariate statistical techniques. Furthermore, novel techniques are proposed to characterize the surface quality of a product using multiresolution and multivariate image analysis.

For the first class of products, multivariate statistical soft sensors are proposed for the real-time estimation of the product quality and for the online prediction of the length of batch processes. It is shown that, for the purpose of real-time quality estimation, the complex series of operating steps of a batch can be simplified to a sequence of estimation phases in which linear PLS models can be applied to regress the quality from the process data available online. The resulting estimation accuracy is satisfactory, but can be substantially improved if dynamic information is included in the models. Dynamic information is provided either by augmenting the process data matrix with lagged measurements, or by averaging the process measurement values over a moving window of fixed length. The process data progressively collected from the plant can also be exploited by designing time-evolving PLS models to predict the batch length. These monitoring strategies are tested in a real-world industrial batch polymerization process for the production of resins, and prototypes of the soft sensor are implemented online.

For products where surface properties define the overall quality, novel multiresolution and multivariate techniques are proposed to characterize the surface of a product from image analysis. After analyzing an image of the product surface at different levels of resolution via wavelet decomposition, the application of multivariate statistical monitoring tools allows the in-depth examination of the product features. A two-level “nested” principal component analysis (PCA) model is used for surface roughness monitoring, while a new strategy based on “spatial moving window” PCA is proposed to analyze the shape of the surface pattern. The proposed approach identifies the abnormalities on the surface and localizes defects in a sensitive fashion. Its effectiveness is tested in the case of scanning electron microscope images of semiconductor surfaces after the photolithography process in the production of integrated circuits.
Article
This review article has been written for the journal, Biotechnology and Bioengineering, to commemorate the 70th birthday of Daniel I.C. Wang, who served as doctoral thesis advisor to each of the co-authors, but a decade apart. Key roots of the current PAT initiative in bioprocess monitoring and control are described, focusing on the impact of Danny Wang's research as a professor at MIT. The history of computer control and monitoring in biochemical processing has been used to identify the areas that have already benefited and those that are most likely to benefit in the future from PAT applications. Past applications have included the use of indirect estimation methods for cell density, expansion of on-line/at-line and on-line/in situ measurement techniques, and development of models and expert systems for control and optimization. Future applications are likely to encompass additional novel measurement technologies, measurements for multi-scale and disposable bioreactors, real time batch release, and more efficient data utilization to achieve process validation and continuous improvement goals. Dan Wang's substantial contributions in this arena have been one key factor in steering the PAT initiative towards realistic and attainable industrial applications.
Conference Paper
Full-text available
A new method of wavelet theory-based self-adaptive multi-scale principal component analysis (W-AMSPCA) is proposed for process signal monitoring and diagnosis. The technique uses wavelet analysis to decompose the signals and reconstruct them in order to remove noise, disturbances and outliers, and then uses an adaptive PCA algorithm to reduce the dimensions of the process signals and identify different wavelet coefficients based on the multi-scale decomposition. The proposed method can detect and analyze, at an early stage, the slow and feeble changes of process signals that cannot be monitored by normal PCA. Furthermore, the operational framework and algorithm of W-AMSPCA for on-line signal monitoring and diagnosis are presented.
Conference Paper
A novel method of wavelet-based adaptive multiscale principal component analysis (MSPCA) is proposed for process signal acquisition and diagnosis. The wavelet transform is used to decompose the process signals and, at the same time, analyze the signals at different scales based on multiresolution signal analysis; the signals are then reconstructed in order to remove noise and disturbances. The adaptive PCA algorithm is adopted to monitor and diagnose abnormal situations on the basis of the multiscale wavelet coefficients, and to analyze the slow and feeble changes of fault signals that cannot be acquired and monitored by conventional PCA. Furthermore, the theoretical framework and practical procedure of the wavelet-based adaptive MSPCA algorithm for online process signal monitoring and diagnosis are also presented. Experimental simulations and practical application results verify the validity and dependability of the proposed method.
Conference Paper
Full-text available
Batch and fed-batch bioprocesses generally exhibit batch-to-batch variation. Multivariate statistical monitoring of these processes, based on empirical models developed from multiway principal component analysis, was performed by using contribution, T², and squared prediction error plots. To cope with uncertainties in the fermentation process and to provide more effective supervision, a knowledge-based system was developed, allowing the coupling of quantitative statistical information with qualitative domain expertise (heuristic knowledge). This hybrid system (PenExpert) aims to perform end-of-batch process monitoring as well as online process monitoring and fault diagnosis.
Article
Online expert systems are now an integral part of process automation. As a continuation of a two-part series focusing on the rationale for using such systems, this article presents a review of the specific benefits and learning points resulting from Lilly's use of this technology in its fermentation development and production plants.
Article
Statistical process control methods for monitoring processes with multivariate measurements in both the product quality variable space and the process variable space are considered. Traditional multivariate control charts based on χ2 and T2 statistics are shown to be very effective for detecting events when the multivariate space is not too large or ill-conditioned. Methods for detecting the variable(s) contributing to the out-of-control signal of the multivariate chart are suggested. Newer approaches based on principal component analysis and partial least squares are able to handle large, ill-conditioned measurement spaces; they also provide diagnostics which can point to possible assignable causes for the event. The methods are illustrated on a simulated process of a high-pressure low-density polyethylene reactor, and examples of their application to a variety of industrial processes are referenced.
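The classical control limit behind such T2 charts can be computed directly. A small sketch assuming SciPy, with the reference-sample size and dimension chosen arbitrarily: for a new observation judged against a model estimated from N reference samples in p dimensions, the limit follows an F distribution.

```python
from scipy.stats import f

def t2_limit(n_ref, n_dims, alpha=0.01):
    """Hotelling T2 limit for a new observation, estimated from n_ref samples."""
    p, N = n_dims, n_ref
    return p * (N - 1) * (N + 1) / (N * (N - p)) * f.ppf(1 - alpha, p, N - p)

# Example: 99% limit for a chart on 3 latent variables fit from 30 batches.
print(t2_limit(n_ref=30, n_dims=3))
```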
Article
The industrial application of a new monitoring scheme for batch and semi-batch processes is presented. Multi-way Principal Component Analysis is used to analyze the information from the on-line process measurements. The basic idea is to build a statistical model based on process measurements from past successful batches, which describes the normal operation of the process. Subsequently future batches are compared against this model and characterized as normal or abnormal. The algorithms and all the design equations are presented for setting up Statistical Process Control charts which monitor the performance of a batch process. Contribution plots for detected abnormal operations are developed to identify the measurement variables and time periods of abnormal operation.
Article
Due to sophisticated experimental designs and to modern instrumental constellations the investigation of N-dimensional (or N-way or N-mode) data arrays is attracting more and more attention. Three-dimensional arrays may be generated by collecting data tables with a fixed set of objects and variables under different experimental conditions, at different sampling times, etc. Stacking all the tables along varying conditions provides a cubic arrangement of data. Accordingly the three index sets or modes spanning a three-way array are called objects, variables and conditions. In many situations of practical relevance even higher-dimensional arrays have to be considered. Among numerous extensions of multivariate methods to the three-way case the generalization of principal component analysis (PCA) has central importance. There are several simplified approaches of three-way PCA by reduction to conventional PCA. One of them is unfolding of the data array by combining two modes to a single one. Such a procedure seems reasonable in some specific situations like multivariate image analysis, but in general combined modes do not meet the aim of data reduction. A more advanced way of unfolding which yields separate component matrices for each mode is the Tucker 1 method. Some theoretically based models of reduction to two-way PCA impose some specific structure on the array. A proper model of three-way PCA was first formulated by Tucker (so-called Tucker 3 model among other proposals). Unfortunately the Tucker 1 method is not optimal in the least squares sense of this model. Kroonenberg and De Leeuw demonstrated that the optimal solution of Tucker's model obeys an interdependent system of eigenvector problems and they proposed an iterative scheme (alternating least squares algorithm) for solving it. With appropriate notation Tucker's model as well as the solution algorithm are easily generalized to the N-way case (N > 3). There are some specific aspects of three-way PCA, such as complicated ways of data scaling or interpretation and simple-structure-transformation of a so-called core matrix, which make it more difficult to understand than classical PCA. An example from water chemistry serves as an illustration. Additionally, there is an application section demonstrating several rules of interpretation of loading plots with examples taken from environmental chemistry, analysis of complex round robin tests and contamination analysis in tungsten wire production.
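As a rough sketch of the Tucker 1 unfolding idea described above (not the Tucker 3 alternating least squares algorithm): each mode gets its own component matrix from the SVD of the corresponding unfolding, and a core array then follows by projection. NumPy is assumed; the array dimensions and component counts are invented for illustration.

```python
import numpy as np

def tucker1(X, ranks):
    """Component matrix for each mode from the mode-n unfoldings of X."""
    factors = []
    for mode, r in enumerate(ranks):
        # Move `mode` to the front and flatten the remaining two modes.
        Xn = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])        # leading left singular vectors
    return factors

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 15, 8))        # objects x variables x conditions
A, B, C = tucker1(X, ranks=(3, 3, 2))

# Core array of the Tucker model: project X onto the three factor spaces.
G = np.einsum("ijk,ia,jb,kc->abc", X, A, B, C)
```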
Article
The Lohmöller–Wold decomposition of multi-way (three-way, four-way, etc.) data arrays is combined with the non-linear iterative partial least squares (NIPALS) algorithms to provide multi-way solutions of principal components analysis (PCA) and partial least squares modelling in latent variables (PLS). The decomposition of a multi-way array is developed as the product of a score vector and a loading array, where the score vectors have the same properties as those of ordinary two-way PCA and PLS. In image analysis, the array would instead be decomposed as the product of a loading vector and an image score matrix. The resulting methods are equivalent to the method of unfolding a multi-way array to a two-way matrix followed by ordinary PCA or PLS analysis. This automatically proves the eigenvector and least squares properties of the multi-way PCA and PLS methods. The methodology is presented; the algorithms are outlined and illustrated with a small chemical example.
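A minimal sketch of one NIPALS iteration cycle for PCA, the two-way building block that the paper extends to multi-way arrays; deflation then yields further components. NumPy is assumed and the data are synthetic.

```python
import numpy as np

def nipals_component(X, tol=1e-10, max_iter=500):
    """One principal component of mean-centered X by NIPALS iteration."""
    t = X[:, 0].copy()                  # initialize the score with a column of X
    for _ in range(max_iter):
        p = X.T @ t / (t @ t)           # loading from the current score
        p /= np.linalg.norm(p)
        t_new = X @ p                   # score from the normalized loading
        if np.linalg.norm(t_new - t) < tol:
            return t_new, p
        t = t_new
    return t, p

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 7))
X -= X.mean(axis=0)
t1, p1 = nipals_component(X)
X_deflated = X - np.outer(t1, p1)       # deflate, then repeat for component 2
```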
Article
Many recent attempts to use expert systems for process fault diagnosis have included information derived from deep knowledge. This information is generally implemented as a rule-based expert system. Drawbacks of this architecture are a lack of generality, poor handling of novel situations, and a lack of transparency. An algorithm called the diagnostic model processor is introduced; it uses the satisfaction of model equations from process plants to arrive at the most likely fault condition. The method is generalized by the process model and diagnostic methodology being separated. The architecture addresses each of the shortcomings discussed. Experiments show that the methodology is capable of correctly identifying fault situations. Furthermore, information is derived from an a priori analysis technique, which is used to show the degree to which different faults can be discriminated based on the model equations available. The results of this analysis add further insight into the diagnoses provided by the diagnostic model processor.
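The following is a loose, generic sketch of the diagnostic model processor idea, not Petti's published algorithm: residuals of model equations are mapped to bounded "satisfaction" values, and each candidate fault is scored by how well the observed violation pattern matches an assumed fault signature. All names, tolerances, and signatures here are hypothetical.

```python
import numpy as np

def satisfaction(residual, tolerance):
    """Map an equation residual to [-1, 1]: ~0 satisfied, ±1 strongly violated."""
    return np.tanh(residual / tolerance)

def fault_likelihoods(residuals, tolerances, signature):
    """Score each fault by how well the pattern of violated equations matches
    its expected signature (rows: faults, cols: equations, entries ±1 or 0)."""
    s = satisfaction(np.asarray(residuals), np.asarray(tolerances))
    weights = np.abs(signature).sum(axis=1)
    return (signature @ s) / np.maximum(weights, 1)

# Two model equations, three candidate faults (signatures are illustrative).
signature = np.array([[+1, 0],     # fault 1 violates eq. 1 positively
                      [0, +1],     # fault 2 violates eq. 2 positively
                      [+1, -1]])   # fault 3 pushes eq. 1 up and eq. 2 down
print(fault_likelihoods([0.8, -0.7], [0.5, 0.5], signature))
```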
Article
Multivariate statistical procedures for monitoring the progress of batch processes are developed. The only information needed to exploit the procedures is a historical database of past successful batches. Multiway principal component analysis is used to extract the information in the multivariate trajectory data by projecting them onto low-dimensional spaces defined by the latent variables or principal components. This leads to simple monitoring charts, consistent with the philosophy of statistical process control, which are capable of tracking the progress of new batch runs and detecting the occurrence of observable upsets. The approach is contrasted with other approaches which use theoretical or knowledge-based models, and its potential is illustrated using a detailed simulation study of a semibatch reactor for the production of styrene-butadiene latex.
Article
In chemical kinetics and batch processes K variables are measured on the batches at regular time intervals. This gives a J×K matrix for each batch (J time points times K variables). Consequently, a set of N normal batches gives a three-way matrix of dimension (N×J×K). The case when batches have different length is also discussed. In a typical industrial application of batch modelling, the purpose is to diagnose an evolving batch as normal or not, and to obtain indications of variables that together behave abnormally in batch process upsets. Other applications giving the same form of data include pharmaco-kinetics, clinical and pharmacological trials where patients (or mice) are followed over time, material stability testing and other kinetic investigations. A new approach to the multivariate modelling of three-way kinetic and batch process data is presented. This approach is based on an initial PLS analysis of the ((N×J)×K) unfolded matrix ((batch×time)×variables) with 'local time' used as a single y-variable. This is followed by a simple statistical analysis of the resulting scores and results in multivariate control charts suitable for monitoring the kinetics of new experiments or batches. 'Upsets' are effectively diagnosed in these charts, and variables contributing to the upsets are indicated in contribution plots. In addition, the degree of 'maturity' of the batch can be assessed as predicted vs. observed local time. The analysis of batch data is discussed with respect to typical objectives: overview and summary, classification, and quantitative modelling. This is illustrated by an industrial example of yeast production.
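A compact sketch of the core device described here: PLS on the observation-wise unfolded ((batch×time)×variables) matrix with local batch time as the single y-variable, so that predicted vs. observed time serves as a maturity check. scikit-learn's PLSRegression is used for brevity; the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
N, J, K = 20, 40, 5                       # batches, time points, variables

# Observation-wise unfolding: (N*J) rows of K variables, y = local batch time.
X = rng.normal(size=(N, J, K)).reshape(N * J, K)
y = np.tile(np.arange(J, dtype=float), N)

pls = PLSRegression(n_components=3)
pls.fit(X, y)

# "Maturity" check for a new batch: predicted vs. observed local time.
x_new = rng.normal(size=(J, K))
maturity = pls.predict(x_new).ravel()     # compare against np.arange(J)
```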
Article
Simulation software based on a detailed unstructured model for penicillin production in a fed-batch fermentor has been developed. The model extends the mechanistic model of Bajpai and Reuss by adding input variables such as pH, temperature, aeration rate, agitation power, and feed flow rate of substrate and introducing the CO2 evolution term. The simulation package was then used for monitoring and fault diagnosis of a typical penicillin fermentation process. The simulator developed may be used for both research and educational purposes and is available at the web site: http://www.chee.iit.edu/~control/software.html.
Article
Recent advances in artificial intelligence have changed the fundamental assumptions upon which the progress of computer-aided process engineering (modeling and methodologies) during the last 30 yr has been founded. Thus, in certain instances, numerical computations today constitute inferior alternatives to qualitative and/or semi-quantitative models and procedures which can capture and utilize more broadly-based sources of knowledge. In this paper it will be shown how process development and design, as well as planning, scheduling, monitoring, analysis and control of process operations can benefit from improved knowledge-representation schemes and advanced reasoning control strategies. It will also be argued that the central challenge coming from research advances in artificial intelligence is "modeling the knowledge", i.e. modeling: (a) physical phenomena and the systems in which they occur; (b) information handling and processing systems; and (c) problem-solving strategies in design, operations and control. Thus, different strategies require different forms of declarative knowledge, and the success or failure of various design, planning, diagnostic and control systems depends on the extent of actively utilizable knowledge. Furthermore, this paper will outline the theoretical scope of important contributions from AI and what their impact has been and will be on the formulation and solution of process engineering problems.
Article
A design of a multivariate knowledge-based fault diagnosis system is described in this paper. The proposed design is based on a novel strategy which integrates multivariate statistical process control (MSPC) monitoring into knowledge-based (KB) fault diagnosis, both qualitatively and quantitatively, using expert system technology. The integration mechanism mimics how process engineers combine their process knowledge with the principal component (PC) score contribution, PC score deviation contribution, and squared prediction error (SPE) contribution of principal component analysis (PCA) projection in diagnosing anomalies. The system has been successfully implemented in the G2 environment. A dynamic simulation of a continuous stirred tank reactor (CSTR) running a second-order exothermic reaction was used to test the proposed system. Testing results clearly indicated that the system produces more contrasting probabilities among the possible exogenous causes, and it can give accurate diagnoses when process upsets are undetected by univariate monitoring.
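One way to picture the qualitative/quantitative integration, as a hedged sketch rather than the paper's G2 implementation: rank variables by their contribution to the alarm statistic and let the ranking drive weighted, rule-based cause scoring. The variable names, rules, and weights below are hypothetical.

```python
def diagnose(contributions, rules, top_n=3):
    """Combine MSPC evidence with heuristic knowledge: rank variables by their
    contribution to the alarm statistic, then fire rules keyed on those
    variables. `rules` maps a variable to candidate causes with prior weights."""
    ranked = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    scores = {}
    for var in ranked:
        for cause, weight in rules.get(var, []):
            scores[cause] = scores.get(cause, 0.0) + weight * contributions[var]
    total = sum(scores.values()) or 1.0
    return {cause: s / total for cause, s in scores.items()}

# Illustrative knowledge base for a CSTR-like process (names are hypothetical).
rules = {
    "reactor_temp":  [("cooling water failure", 0.7), ("fouling", 0.3)],
    "coolant_flow":  [("cooling water failure", 0.9)],
    "feed_conc":     [("feed composition upset", 1.0)],
}
print(diagnose({"reactor_temp": 5.2, "coolant_flow": 3.1, "feed_conc": 0.4},
               rules))
```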
Article
In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
Article
An intelligent process monitoring and fault diagnosis environment has been developed by interfacing multivariate statistical process monitoring (MSPM) techniques and knowledge-based systems (KBS) for monitoring multivariable process operation. The real-time KBS developed in G2 is used with multivariate SPM methods based on canonical variate state space (CVSS) process models. Fault detection is based on T2 charts of state variables. Contribution plots in G2 are used for determining the process variables that have contributed to the out-of-control signal indicated by large T2 values, and G2 Diagnostic Assistant (GDA) is used to diagnose the source causes of abnormal process behavior. The MSPM modules developed in Matlab are linked with G2. This intelligent monitoring and diagnosis system can be used to monitor multivariable processes with autocorrelated, crosscorrelated, and collinear data. The structure of the integrated system is described and its performance is illustrated by simulation studies.
Article
The increasing complexity of chemical plants has caused the chemical industry to look towards automated and structured approaches for identifying and diagnosing process abnormalities during the normal course of a plant's daily operation. One such approach is to make use of a knowledge-based expert system which can perform diagnostic analysis. Many of the recent attempts have focused on using compiled process knowledge, relating symptoms to causes represented as production rules in the knowledge base. Though this leads to real-time diagnostic efficiency, such expert systems lack flexibility with respect to process changes and are incapable of diagnosing novel symptom combinations. The rule-based approaches also lead to knowledge bases that are difficult to develop and maintain, as they lack structures that reflect higher-level organization of process knowledge. In this paper, we present a diagnostic methodology that provides the means to solve these problems. We advocate a diagnostic methodology that integrates compiled knowledge with deep-level knowledge, thus achieving diagnostic efficiency without sacrificing flexibility and reliability under novel circumstances. To formalize such an integration, we also propose an object-oriented two-tier knowledge base that houses process-specific compiled knowledge in the top tier and process-general deep-level knowledge in the bottom tier. The diagnostic reasoning effectively alternates between the two tiers of knowledge for efficient and complete diagnosis. An important aspect of diagnostic reasoning is to be able to generate potential causes of the observed symptoms or faults as candidate malfunction hypotheses. We describe an agenda-based inference control algorithm that generates malfunction hypotheses by deriving them from structural and functional information of the process. We discuss the salient features of an expert system, called MODEX2, that has been implemented using these ideas.
Article
This paper discusses contribution plots for both the D-statistic and the Q-statistic in multivariate statistical process control of batch processes. Contributions of process variables to the D-statistic are generalized to any type of latent variable model with or without orthogonality constraints. The calculation of contributions to the Q-statistic is discussed. Control limits for both types of contributions are introduced to show the relative importance of a contribution compared to the contributions of the corresponding process variables in the batches obtained under normal operating conditions. The contributions are introduced for off-line monitoring of batch processes, but can easily be extended to on-line monitoring and to continuous processes, as is shown in this paper.
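A minimal sketch of Q-statistic (SPE) contributions with empirical control limits derived from normal-operating-condition data, in the spirit of the limits introduced here; the mean-plus-3-sigma rule and all data are illustrative assumptions, not the paper's exact limit derivation.

```python
import numpy as np

def spe_contributions(z, P):
    """Per-variable contributions to the Q-statistic: squared residuals after
    projecting the (scaled) observation z onto the loading matrix P."""
    resid = z - P @ (P.T @ z)
    return resid**2

def contribution_limits(Z_noc, P, n_sigma=3.0):
    """Variable-wise limits from normal-operating-condition data:
    mean contribution plus a multiple of its standard deviation."""
    C = np.array([spe_contributions(z, P) for z in Z_noc])
    return C.mean(axis=0) + n_sigma * C.std(axis=0, ddof=1)

rng = np.random.default_rng(6)
Z_noc = rng.normal(size=(30, 8))                    # scaled NOC observations
_, _, Vt = np.linalg.svd(Z_noc - Z_noc.mean(axis=0), full_matrices=False)
P = Vt[:3].T
limits = contribution_limits(Z_noc, P)
flags = spe_contributions(rng.normal(size=8) * 2, P) > limits
```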
Article
Diagnosis in chemical processing plants is recognized as an activity in which efficiency is achieved through structure in both the knowledge and the problem-solving strategy. By exploiting this structure in diagnostic expert systems, an efficient methodology for navigating the solution space of possible plant malfunctions results. One approach to computationally describing this structure is in terms of a small, finite set of underlying tasks which comprise the diagnostic activity. Since the task descriptions are independent of a particular application, the integration of the tasks forms a framework which is generally applicable to diagnosis in the domain. Within the context of this approach, a framework for a diagnostic expert system in the chemical plant domain is shown to consist fundamentally of a primary task associated with plant sensors and an auxiliary task associated with product quality data. The framework provides a means of appropriately leveraging both compiled and model-based knowledge.
Article
Multivariate statistical procedures for monitoring the progress of batch processes are developed. Multi-way partial least squares (MPLS) is used to extract the information from the process measurement variable trajectories that is more relevant to the final quality variables of the product. The only information needed is a historical database of past successful batches. New batches can be monitored through simple monitoring charts which are consistent with the philosophy of statistical process control. These charts monitor the batch operation and provide on-line predictions of the final product qualities. Approximate confidence intervals for the predictions from PLS models are developed. The approach is illustrated using a simulation study of a styrene-butadiene batch reactor.
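A rough sketch of MPLS-style online quality prediction under one common missing-data choice: regress final quality on the unfolded trajectories, then, mid-batch, fill the unmeasured future with zeros (i.e., assume the scaled variables remain at their normal-operation mean). scikit-learn's PLSRegression stands in for a dedicated MPLS implementation; all data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
N, J, K = 25, 40, 5                       # batches, time points, variables
X = rng.normal(size=(N, J * K))           # unfolded, autoscaled trajectories
Y = rng.normal(size=(N, 2))               # end-of-batch quality (synthetic)

mpls = PLSRegression(n_components=3).fit(X, Y)

def predict_quality(partial, j_done):
    """Predict final quality mid-batch: the unmeasured future is filled with
    zeros, i.e. scaled variables are assumed to stay at their NOC mean."""
    x = np.zeros(J * K)
    x[: j_done * K] = partial[:j_done].reshape(-1)
    return mpls.predict(x[None, :]).ravel()

print(predict_quality(rng.normal(size=(J, K)), j_done=15))
```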
Article
The background and motivation for the construction of a fault detection and advisory system for an industrial fermentation process plant are described. Here, the knowledge extracted from the operators (implemented in the form of production rules) is integrated with multivariate data-based methods for fault detection. The industrial benefits arising from this integrated system include: (1) reduced variability, (2) increased mean performance levels, (3) reduced operator-training time and (4) knowledge management in the broader organization.
Article
This article describes the development of Multivariate Statistical Process Control (MSPC) procedures for monitoring batch processes and demonstrates its application with respect to industrial tylosin biosynthesis. Currently, the main fermentation phase is monitored using univariate statistical process control principles implemented within the G2 real-time expert system package. This development addresses integrating various process stages into a monitoring system and observing interactions among individual variables through the use of multivariate projection methods. The benefits of this approach will be discussed from an industrial perspective.
Article
Market demand places great emphasis in industry on product quality. Consequently, process monitoring and control have become important aspects of systems engineering. In this article we detail the results of a 2-year study focusing on the development of a condition monitoring system for a fed-batch fermentation system operated by Biochemie GmbH in Austria. We also demonstrate the suitability and limitations of current state-of-the-art technologies in this field and suggest novel modifications and configurations to improve their suitability for application to a fed-batch fermentation system.
Article
A knowledge-based system (KBS) was designed for automated system identification, process monitoring, and diagnosis of sensor faults. The real-time KBS consists of a supervisory system using G2 KBS development software linked with external statistical modules for system identification and sensor fault diagnosis. The various statistical techniques were prototyped in MATLAB, converted to ANSI C code, and linked with the G2 Standard Interface. The KBS automatically performs all operations of data collection, identification, monitoring, and sensor fault diagnosis with little or no input from the user. Navigation throughout the KBS is via menu buttons on each user-accessible screen. Selected process variables are displayed on charts showing the history of the variables over a period of time. Multivariate statistical tests and contribution plots are also shown graphically. The KBS was evaluated using simulation studies with a polymerization reactor through a nonlinear dynamic model. Both normal operation conditions as well as conditions of process disturbances were observed to evaluate the KBS performance. Specific user-defined disturbances were added to the simulation, and the KBS correctly diagnosed both process and sensor faults when present.
Cinar, A., Parulekar, S., Undey, C., Birol, G., 2003. Batch Fermentation: Modeling, Monitoring and Control. Marcel Dekker, New York, NY.
Gensym Corporation, 2001. G2 Reference Manual. Cambridge, MA.
Glassey, J., Montague, G., Mohan, P., 2000. Issues in the development of an industrial bioprocess advisory system. Trends in Biotechnology 18, 136–141.
Mathworks, 2001a. Matlab, Version 6.1. The MathWorks, Inc., Natick, MA. www.mathworks.com
Mathworks, 2001b. Matlab Compiler, Version 2.2, User's Guide. The MathWorks, Inc., Natick, MA.
Venkatasubramanian, V., Rengaswamy, R., Kavuri, S.N., 2003. A review of process fault detection and diagnosis. Part II: Qualitative models and search strategies. Computers and Chemical Engineering 27, 313–326.
Petti, T.F., Klein, J., Dhurjati, P.S., 1990. Diagnostic model processor: using deep knowledge for process fault diagnosis. AIChE Journal 36, 565–575.