Article

Fault Diagnosis Systems: An Introduction from Fault Detection to Fault Tolerance


Abstract

Includes bibliography and index.


... A fault is an abnormal condition that may cause a reduction in, or loss of, the capability of a functional unit to perform a required function [1]. A failure in the system is an event that corresponds to the first occurrence of the generated error. ...
... High-integrity systems must have the ability of fault tolerance; thus, faults are compensated in such a way that they do not lead to system failures. Beyond applying principles that improve the reliability of the system's components, one or more modules can be added to the considered module as backup modules in a parallel configuration [1] (Fig. 4). ...
... In general, the function modules are supervised by the fault detection capability of the system, followed by a reconfiguration mechanism that switches off failed modules and switches on spare modules (dynamic redundancy) [4]. Fig. 4: Scheme of a fault-tolerant system with parallel function modules as redundancy [1]. A system is able to recover from its last (current) process failure only if it has a record of its last (current) process and information about that process's status. Let us assume that there is one copy of the process (e.g., on a second processor), while the actual process is running on the first processor. ...
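The dynamic-redundancy scheme described in this excerpt (detect a failed module, switch off the primary, switch on a spare) can be sketched as follows. This is a minimal illustration, not the book's design; the module names and the plausibility-based fault check are assumptions.

```python
# Hypothetical sketch of dynamic redundancy: a supervisor checks the
# primary module's output and switches in a spare when the primary fails.

class Module:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def output(self, u):
        # A failed module returns no usable output.
        return u * 2 if self.healthy else None

def reconfigure(primary, spares, u):
    """Return (active_module, output): keep the primary if its output
    passes the fault check, otherwise switch in the first healthy spare."""
    y = primary.output(u)
    if y is not None:                      # fault detection: plausibility check
        return primary, y
    for spare in spares:                   # reconfiguration: activate a spare
        y = spare.output(u)
        if y is not None:
            return spare, y
    raise RuntimeError("all redundant modules failed")

primary = Module("M1")
spares = [Module("M2")]
primary.healthy = False                    # inject a fault in the primary
active, y = reconfigure(primary, spares, u=3)
print(active.name, y)                      # -> M2 6
```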
... Faults can develop during building HVAC systems' design, construction, and operation, resulting in excessive energy waste [1,2]. There are many types of faults, e.g., design, manufacturing, assembly, wrong operation, maintenance, hardware, software, and operator's faults; some faults that humans directly cause may be called errors [3]. In general, faults occurring in the service life of a building are divided into two main categories: hard and soft faults. ...
... Maintenance is understood as an action taken to retain a system in its designed operating condition or bring it back to the design condition. It extends the useful life of systems and ensures the optimum availability of installed equipment or equipment for emergency use [3]. Preventive or scheduled maintenance starts with inspection and incorporates actions such as cleaning, adjustment, lubrication, and replacement of small parts before they fail at predetermined intervals [3]. ...
... It extends the useful life of systems and ensures the optimum availability of installed equipment or equipment for emergency use [3]. Preventive or scheduled maintenance starts with inspection and incorporates actions such as cleaning, adjustment, lubrication, and replacement of small parts before they fail at predetermined intervals [3]. Unplanned maintenance is completed in emergencies to avoid immediate shutdowns or for safety purposes [3]. ...
Article
While the emphasis of fault detection and diagnostics (FDD) research has been on hard faults (e.g., a stuck/leaking/broken valve or damper), soft faults or, in general, human errors account for a significant portion of faults occurring in variable air volume (VAV) air handling unit (AHU) systems. Human errors encompass a wide range of errors occurring in different components; however, this paper focuses only on VAV AHU control systems (sensors, actuators, and sequencing logic). This research identifies human errors made by technical professionals of the building industry, such as engineers, contractors, and operators, during VAV AHU control systems' design, construction, and operation phases through a literature review and interviews with specialists involved in control systems. First, the most common human-induced errors in VAV AHU control systems are classified, and examples from 11 interviews with industry professionals are listed for different building life cycle phases. Then the research gaps in terms of methods geared to detect, diagnose, assess, and correct these errors are identified. Finally, recommendations are provided for developing new FDD methods and tools to facilitate the detection of human errors, while highlighting the need for training programs for control practitioners. This paper is a comprehensive resource for providing high-quality control service, resulting in reduced energy consumption, improved occupant thermal comfort, and longer HVAC equipment life.
... A number of survey papers and books have been written. For instance, [41,49,63,73,88] give a review of FDD methods, and [15-17, 69, 70, 84, 97, 135] provide a review of the existing FTC technologies. When a fault occurs in the system, the desired performance can sometimes be achieved by designing a robust controller. ...
... Therefore, it is very important to detect these faults. Typical examples of sensor faults are listed in [33], [106]: bias, drift, performance degradation (or loss of accuracy), sensor freezing, and calibration error, as illustrated in Figure 2. Based on the behavior of faults, i.e., their time profiles, faults can be classified as abrupt, incipient, and intermittent [63], as shown in Figure 2.4, where t_f is the time of fault occurrence.

ẋ(t) = A x(t) + B u(t) + L f_l(t),  y(t) = C x(t) + M f_m(t)   (2.4)

where x(t), u(t), y(t) represent the state, input, and output of the system, respectively. ...
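The sensor fault profiles named in this excerpt (abrupt bias, incipient drift, sensor freezing) can be illustrated with a small sketch. The signal, fault magnitudes, and function name below are illustrative assumptions, not taken from the cited work; t_f is the fault occurrence time, matching the excerpt's notation.

```python
# Illustrative sketch: superimposing typical sensor fault profiles
# (bias, drift, freezing) on a clean measurement after time t_f.

def faulty(signal, t_f, kind, bias=1.0, slope=0.1):
    out = []
    frozen = None
    for t, y in enumerate(signal):
        if t < t_f:
            out.append(y)                      # fault-free before t_f
        elif kind == "bias":
            out.append(y + bias)               # abrupt constant offset
        elif kind == "drift":
            out.append(y + slope * (t - t_f))  # incipient, slowly growing
        elif kind == "freeze":
            if frozen is None:
                frozen = y                     # sensor stuck at its last value
            out.append(frozen)
    return out

clean = [0.0] * 6
print(faulty(clean, 3, "bias"))    # -> [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
```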
... Since the 1970s, a number of FDD theories and methods have been developed, and many excellent survey papers have been written, for example, [41,49,63,73,88]. In 2003, a comprehensive review of the development of the FDD process appeared as a series of three papers [119][120][121], describing quantitative model-based methods, qualitative model-based methods, and process history based methods. ...
Thesis
Due to the increasing demand for higher safety and reliability of dynamic systems, fault detection and diagnosis (FDD) as well as fault tolerant control (FTC) are becoming effective methods to avoid breakdowns and disasters in major systems. This thesis therefore focuses on developing observer-based fault diagnosis and fault tolerant control strategies for complex nonlinear systems. A case study on an intensified heat exchanger/reactor (HEX reactor) is used to illustrate and demonstrate the proposed fault tolerant control techniques. In the chemical engineering field, an intensified HEX reactor is a multifunctional device that combines a heat exchanger and a chemical reactor in one hybrid unit. Thanks to its remarkable thermal and hydrodynamic performance, the intensified HEX reactor is a promising way to meet the increasing requirements for safer operating conditions, lower cost, and less energy waste. However, undesirable failures, such as thermal runaway and fouling in channels, still pose a great threat to such an intensified process. FDD and FTC schemes are therefore needed to maintain satisfactory performance even under faulty conditions. To start, a mathematical model of the HEX reactor is proposed. The effectiveness of the proposed model is demonstrated by comparing its simulated performance with experimental data. After that, classic observers are applied to the considered HEX reactor to find a suitable observer for further fault diagnosis use. The maximum overshoot and settling time of the estimation error system are chosen as the criteria to compare the state estimation performance of each observer.
The adaptive observer presents the best performance, with the minimum overshoot and shortest settling time among the observers considered. To design a fault tolerant control strategy for the considered HEX reactor, a nominal control law based on the backstepping technique is first proposed to guarantee that the process fluid temperature follows the desired value. Then a fault is introduced. A bank of adaptive observers is used for fault detection, isolation, and identification. Once the fault is isolated and identified, the control law is reconstructed so that the system still satisfies the expected performance in the faulty case. Both dynamic faults and sensor faults are considered in this thesis. Moreover, an interval observer based FTC strategy is also proposed; its main idea is the same as in the adaptive observer based FTC scheme: controller reconfiguration. After applying the proposed FTC strategies to the considered HEX reactor, their effectiveness has been demonstrated. Even when the system is affected by a dynamic fault or a sensor fault, the process fluid temperature still shows satisfactory tracking performance. A comparison of the proposed FTC strategies shows that the interval observer based strategy achieves faster fault isolation.
... The paper is organised as follows: Section 2 provides the background on CDFD and optimal transport. Section 3 discusses the application on the CSTR, presenting the ... 1. INTRODUCTION. Faults are unpermitted deviations or anomalies of characteristic properties or variables in a system (Isermann, 2006). If not corrected in time, they might evolve into serious accidents and cause significant safety, environmental, and economic impacts (Chiang et al., 2000). ...
... Faults are unpermitted deviations or anomalies of characteristic properties or variables in a system (Isermann, 2006). If not corrected in time, they might evolve into serious accidents and cause significant safety, environmental, and economic impacts (Chiang et al., 2000). ...
Article
Full-text available
Fault diagnosis is a key task for developing safer control systems, especially in chemical plants. Nonetheless, acquiring good labeled fault data involves sampling from dangerous system conditions. A possible workaround to this limitation is to use simulation data for training data-driven fault diagnosis systems. However, due to modelling errors or unknown factors, simulation data may differ in distribution from real-world data. This setting is known as cross-domain fault diagnosis (CDFD). We use optimal transport for: (i) exploring how modelling errors relate to the distance between simulation (source) and real-world (target) data distributions, and (ii) matching source and target distributions through the framework of optimal transport for domain adaptation (OTDA), resulting in new training data that follows the target distribution. Comparisons show that OTDA outperforms other CDFD methods.
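The matching step of OT-based domain adaptation has a particularly simple form in one dimension: the optimal transport map between two empirical distributions of equal size is the monotone rearrangement, i.e., matching sorted source samples to sorted target samples. The sketch below uses synthetic data and is only a minimal illustration of that idea, not the paper's OTDA method.

```python
# Hedged 1-D illustration of OT-style matching: map each simulation
# (source) sample to the real-world (target) sample of the same rank.

source = [0.0, 1.0, 2.0, 3.0]        # simulation data (biased model)
target = [1.0, 2.0, 3.0, 4.0]        # real-world data (shifted by +1)

def transport_1d(src, tgt):
    """Monotone rearrangement: rank-to-rank matching of equal-size samples."""
    order = sorted(range(len(src)), key=lambda i: src[i])
    tgt_sorted = sorted(tgt)
    mapped = [0.0] * len(src)
    for rank, i in enumerate(order):
        mapped[i] = tgt_sorted[rank]   # i-th smallest src -> i-th smallest tgt
    return mapped

print(transport_1d(source, target))  # -> [1.0, 2.0, 3.0, 4.0]
```

After the mapping, the transported source samples follow the target distribution and can serve as training data, which is the intuition behind training on adapted simulation data.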
... In recent years, there has been a growing interest in fault-tolerant drive systems. This is understood to be a guarantee of system operation at least for a short period of time, despite the failure of selected parts of the system [1][2][3]. A standard approach to achieve fault tolerance is to equip the control system with explicit fault detection and compensation capabilities. ...
... However, FTC strategies for failures of frequency converters and sensors in particular are very similar for both PMSM and induction drives. In general, FTC strategies can be classified as passive and active [1][2][3][4]. In the case of current sensor faults, which are considered in this research, only active methods are possible. ...
Article
Full-text available
In the modern induction motor (IM) drive system, the fault-tolerant control (FTC) solution is becoming more and more popular. This approach significantly increases the security of the system. To choose the best control strategy, fault detection (FD) and fault classification (FC) methods are required. Current sensors (CS) are one of the measuring devices that can be damaged, which in the case of the drive system with IM precludes the correct operation of vector control structures. Due to the need to ensure current feedback and the operation of flux estimators, it is necessary to immediately compensate for the detected damage and classify its type. In the case of the IM drives, there are individual suggestions regarding methods of classifying the type of CS damage during drive operation. This article proposes the use of the classical multilayer perceptron (MLP) neural network to implement the CS neural fault classifier. The online work of this classifier was coordinated with the active FTC structure, which contained an algorithm for the detection and compensation of failure of one of the two CSs used in the rotor field-oriented control (DRFOC) structure. This article describes this structure and the method of designing the neural fault classifier (NN-FC). The operation of the NN-FC was verified by simulation tests of the drive system with an integrated FTC strategy. These tests showed the high efficiency of the developed fault classifier operating in the post-fault mode after compensating the previously detected CS fault and ensuring uninterrupted operation of the drive system.
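The core idea of such a fault classifier, mapping residual features to a fault class, can be sketched with a deliberately simple stand-in for the paper's MLP: a nearest-centroid rule over residual features. The feature templates and class names below are illustrative assumptions, not values from the article.

```python
# Minimal stand-in for an NN fault classifier: assign a current-sensor
# fault type by nearest-centroid matching on (mean residual, residual
# variance) features. Templates are made-up illustration values.

import math

CENTROIDS = {
    "gain_fault":   (0.5, 0.01),
    "offset_fault": (1.0, 0.0),
    "noise_fault":  (0.0, 0.5),
}

def classify(features):
    """Return the fault class whose template is closest in feature space."""
    return min(CENTROIDS, key=lambda c: math.dist(features, CENTROIDS[c]))

print(classify((0.95, 0.02)))   # -> offset_fault
print(classify((0.05, 0.45)))   # -> noise_fault
```

A trained MLP replaces the fixed templates with a learned decision boundary, but the input/output contract (residual features in, fault class out) is the same.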
... Process faults significantly impact the profit of chemical plants. A fault in a dynamic system is an anomalous variation that results in the deviation of process state variables from its acceptable range of operation [24]. Since the effect of faults often propagates along the process it is imperative to detect them soon upon their occurrence. ...
... Lack of observability often arises due to low signal to noise ratio in the measurements used for FDD and the presence of feedback control [24]. The purpose of feedback controllers is partly to compensate for anomalous system variations which can mask the effects of certain faults. ...
Preprint
Full-text available
A Deep Neural Network (DNN) based algorithm is proposed for the detection and classification of faults in industrial plants. The proposed algorithm has the ability to classify faults, especially incipient faults that are difficult to detect and diagnose with traditional threshold based statistical methods or by conventional Artificial Neural Networks (ANNs). The algorithm is based on a Supervised Deep Recurrent Autoencoder Neural Network (Supervised DRAE-NN) that uses dynamic information of the process along the time horizon. Based on this network a hierarchical structure is formulated by grouping faults based on their similarity into subsets of faults for detection and diagnosis. Further, an external pseudo-random binary signal (PRBS) is designed and injected into the system to identify incipient faults. The hierarchical structure based strategy improves the detection and classification accuracy significantly for both incipient and non-incipient faults. The proposed approach is tested on the benchmark Tennessee Eastman Process resulting in significant improvements in classification as compared to both multivariate linear model-based strategies and non-hierarchical nonlinear model-based strategies.
... A first step towards improving reliability is ensuring that each agent constantly checks its own sensor readings. This is typically already considered in the area of fault detection [3]. However, sole reliance on this on-board fault detection (FD) is hazardous. ...
... II. RELATED WORK Fault detection, in general, is a broad field. An overview of classical FD algorithms can be found in [3] and [8]. In this work, we focus on detecting faults in networked multi-agent systems. ...
Preprint
Full-text available
The ability to detect faults is an important safety feature for event-based multi-agent systems. In most existing algorithms, each agent tries to detect faults by checking its own behavior. But what if one agent becomes unable to recognize misbehavior, for example due to a failure in its onboard fault detection? To improve resilience and avoid propagation of individual errors to the multi-agent system, agents should check each other remotely for malfunction or misbehavior. In this paper, we build upon a recently proposed predictive triggering architecture that involves communication priorities shared throughout the network to manage limited bandwidth. We propose a fault detection method that uses these priorities to detect errors in other agents. The resulting algorithm is not only able to detect faults, but can also run on a low-power microcontroller in real time, as we demonstrate in hardware experiments.
... Unjustifiable deviation, in particular, denotes the difference between a threshold value and a fault value, which might result in a process malfunction or failure [11,12]. ...
... Availability: The likelihood that a system or electronic device will perform satisfactorily and effectively at any given time [11,14]. ...
Article
Full-text available
Fault detection and isolation has garnered a lot of attention in industrial systems during the last few decades, because these systems are growing fast and need improved safety and reliability. Therefore, the number of research and scientific papers in the field of fault detection and isolation has increased. This paper presents a detailed survey of fault detection and isolation methods and reviews the scientific research in this field. The survey discusses fault classification as well as a comprehensive review of fault detection and isolation approaches, including model-free methods and model-based methods. Finally, residual generation and residual evaluation are explained.
... This is despite the common knowledge of the disadvantages of alarm systems, such as the flood of alarms, long fault detection time, fault masking effects, etc. (Kościelny and Syfert (2014)). On the other hand, the advantages are apparent in real-time diagnostics using partial models (Fig. 1) (Isermann (2006); Korbicz et al. (2004); Korbicz and Kościelny (2010)). This allows for precise recognition of the faults that have occurred and shortens the time of hazard recognition. ...
Article
Full-text available
The paper aims to discuss the problem of fault isolation robustness. According to the authors, this problem has so far been underestimated and poorly discussed in the process diagnostics community. It was pointed out that the lack of robustness of fault isolation may be one of the main reasons limiting the broader application of advanced diagnostic systems. This was a strong motivation to undertake the research and evaluation of the robustness of fault isolation approaches. Therefore, a significant part of the paper is devoted to identifying and discussing the main reasons for the lack of fault isolation robustness. Also, for research and comparative purposes, the term robustness of fault isolation is defined, together with a preliminary proposal for its estimation, which may find industrial acceptance. Finally, a heuristic qualitative metric of fault isolation robustness is proposed. It allows for a quick and straightforward assessment of fault isolation robustness, which promises practical usability.
... Hence, the knowledge in this field is already well-established. Many monographs (e.g., [35][36][37][38][39][40]), review articles (e.g., [41][42][43]) and a lot of works on specific issues have been written. Various diagnostic systems for industrial processes have been developed (e.g., [38,[44][45][46]). ...
Article
Full-text available
This paper is concerned with the issue of the diagnostics of process faults and the detection of cyber-attacks in industrial control systems. This problem is of significant importance to energy production and distribution, which, being part of critical infrastructure, is usually equipped with process diagnostics and, at the same time, is often subject to cyber-attacks. A commonly used approach would be to separate the two types of anomalies: the detection of process faults would be handled by a control team, often with the help of dedicated diagnostic tools, whereas the detection of cyber-attacks would be handled by an information technology team. In this article, it is postulated that the two can be usefully merged into one comprehensive anomaly detection system. For this purpose, firstly, the main types of cyber-attacks and the main methods of detecting them are reviewed. Subsequently, by analogy to “process fault”—a term well established in process diagnostics—the term “cyber-fault” is introduced. Within this context, a cyber-attack is considered as a vector containing a number of cyber-faults. Next, it is explained how methods used in process diagnostics for fault detection and isolation can be applied to the detection of cyber-attacks and, in some cases, also to the isolation of the components of such attacks, i.e., cyber-faults. A laboratory stand and a simulator have been developed to test the proposed approach. Some test results are presented, demonstrating that, similarly to equipment/process faults, residuals can be established and cyber-faults can be identified based on the mismatch between the real data from the system and the outputs of the simulation model.
... The early and accurate detection of abnormal events is crucial for today's safety-critical systems with their continuously increasing complexity. Model based fault diagnosis has enjoyed considerable attention during the last three decades and provides a rich literature of techniques offering practical solutions for fault diagnosis [1][2][3][4][5][6][7]. The high-fidelity modeling requirements of model based fault diagnosis make it difficult, and sometimes infeasible, to implement for complex systems. ...
Article
Full-text available
Due to the presence of actuator disturbances and sensor noise, increased false alarm rates and decreased fault detection rates in fault diagnosis systems have become major concerns. Various performance indexes have been proposed to deal with such problems, each with certain limitations. This paper proposes a robust performance-index based fault diagnosis methodology using input–output data. The data are used to construct a robust parity space using the subspace identification method and the proposed performance index. The generated residual shows enhanced sensitivity towards faults and robustness against unknown disturbances simultaneously. The threshold for the residual is designed using the Gaussian likelihood ratio, and the wavelet transformation is used for post-processing. The proposed performance index is further used to develop a fault isolation procedure. To specify the location of the fault, a modified fault isolation scheme based on perfect unknown input decoupling is proposed, which makes actuator and sensor residuals robust against disturbances and noise. The proposed detection and isolation scheme is implemented on an induction motor in an experimental setup. The results show a fault detection rate of 98.88%, which is superior to recent research.
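The basic notion behind residual generation and threshold-based evaluation can be shown far more simply than the paper's subspace/parity-space design: with analytical redundancy, two measurements of the same quantity should agree, so their difference is a residual that stays near zero in the fault-free case. The threshold value below is an illustrative assumption; in practice it would be set from noise statistics.

```python
# Simplest-possible residual sketch (not the paper's method): compare two
# redundant measurements and flag a fault when they disagree too much.

THRESHOLD = 0.2   # illustrative; normally derived from the noise level

def residual(y1, y2):
    return y1 - y2

def detect(y1, y2, threshold=THRESHOLD):
    """True when the residual exceeds the threshold (fault suspected)."""
    return abs(residual(y1, y2)) > threshold

print(detect(5.00, 5.05))   # -> False  (within the noise band)
print(detect(5.00, 5.60))   # -> True   (sensor fault suspected)
```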
... A Bayesian network (BN) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph [25]. Due to their good model transparency [26], BNs have been widely adopted for statistical representation and Bayesian inference. While BN inference depends on the collected evidence of network nodes, how to observe network nodes and collect evidence is a sequential decision-making problem [27]. ...
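The kind of inference a Bayesian network performs can be shown on the smallest possible network, a single Fault node with one observed Symptom child, using Bayes' rule directly. The probabilities below are made-up illustration values, not taken from the cited paper.

```python
# Minimal sketch of BN-style diagnosis on a two-node network
# Fault -> Symptom. All probabilities are illustrative assumptions.

P_FAULT = 0.05                 # prior probability of the fault
P_SYMPTOM_GIVEN_FAULT = 0.9    # detection rate of the monitored symptom
P_SYMPTOM_GIVEN_OK = 0.1       # false-alarm rate

def posterior_fault():
    """P(fault | symptom observed), by Bayes' rule."""
    num = P_SYMPTOM_GIVEN_FAULT * P_FAULT
    den = num + P_SYMPTOM_GIVEN_OK * (1 - P_FAULT)
    return num / den

print(round(posterior_fault(), 3))   # -> 0.321
```

Even with a 90% detection rate, the low prior keeps the posterior near 0.32, which is why collecting further evidence (the sequential decision problem the excerpt mentions) matters.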
Article
Full-text available
System verification activities (VA) are used to identify potential errors and corrective activities (CA) are used to eliminate those errors. However, existing math-based methods to plan verification strategies do not consider decisions to implement VAs and perform CA jointly, ignoring their close interrelationship. In this paper, we present a joint verification-correction model to find optimal joint verification-correction strategies (JVCS). The model is constructed so that both VAs and CAs can be chosen as dedicated decisions with their own activity spaces. We adopt the belief model of Bayesian networks to represent the impact of VAs and CAs on verification planning and use three value factors to measure the performance of JVCSs. Moreover, we propose an order-based backward induction approach to solve for the optimal JVCS by updating all verification state values. A case study was conducted to show that our model can be applied to effectively solve the verification planning problem.
... A physical system, in its life cycle, can be subject to failures or malfunctions that can compromise its normal operation. It is therefore necessary to introduce within a plant a system capable of preventing critical interruptions: this is called a fault diagnosis system, and it can identify the possible presence of a malfunction within the monitored system [26]. The search for the fault is one of the most important and qualifying phases of a maintenance intervention, and it is necessary to act in a systematic and deterministic way. ...
Article
Full-text available
Preventive identification of mechanical part failures has always played a crucial role in machine maintenance. Over time, as processing cycles are repeated, the machinery in the production system is subject to wear, with a consequent loss of technical efficiency compared to optimal conditions. These conditions can, in some cases, lead to the breakage of elements, with a consequent stoppage of the production process pending the replacement of the element. This situation entails a large loss of turnover on the part of the company. For this reason, it is crucial to be able to predict failures in advance, to try to replace the element before its wear can cause a reduction in machine performance. Several systems have recently been developed for preventive fault detection that use a combination of low-cost sensors and algorithms based on machine learning. In this work the different methodologies for the identification of the most common mechanical failures are examined, and the most widely applied algorithms based on machine learning are analyzed: Support Vector Machine (SVM) solutions, Artificial Neural Network (ANN) algorithms, Convolutional Neural Network (CNN) models, Recurrent Neural Network (RNN) applications, and Deep Generative Systems. These topics are described in detail, and the works most appreciated by the scientific community are reviewed to highlight the strengths in identifying faults and to outline the directions for future challenges.
... They are derived from automatic control, modeling and identification theories, and computational intelligence techniques. Descriptions of these can be found primarily in the monographs by Gertler (1998), Chen and Patton (2012), Blanke et al. (2015), Ding (2008), Isermann (2006), Witczak (2007), Korbicz and Kościelny (2010) or Bartyś (2014). * Corresponding author The practical utility of diagnostic methods is often limited to a specific class of objects or systems. ...
Article
Full-text available
The paper proposes an original, comprehensive, and methodically consistent graph theory-based approach to the description of the diagnosed process and the diagnosing system. The main baseline of the presented approach is a dichotomous approach to diagnosing: it involves a separate description of both the process and the diagnostic system. This approach reflects the practice of designing implementable diagnostic systems. Thus, it can be seen as a proposal of a new, alternative, and, at the same time, flexible design procedure with great potential for applications. The primary motivation behind it was an attempt to circumvent the numerous limitations of the well-known and well-established diagnosis approaches proposed by the communities working on fault detection and isolation (FDI) and on artificial intelligence theories for diagnosis (DX). Accordingly, the paper identifies and provides an extensive discussion and a critical analysis of the existing limitations. Numerous examples and references to practical applications of the approach are indicated. Keywords: graph of the process, graph of the diagnostic system, fault detection and isolation, qualitative models, limitations of diagnostic approaches.
... Many signals obtained from PEM system processes show oscillations of either harmonic or stochastic nature, or both. If changes in these signals are related to faults in the process, signal processing approaches can be applied for fault diagnosis [123]. When performing a signal-processing-based diagnosis method, two things need to be considered: which signals to use for monitoring, and which efficient signal analysis approach to use for interpretation [124]. ...
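A signal-based check of the kind this excerpt describes can be sketched by monitoring the amplitude of one harmonic via a single DFT bin: a jump in that amplitude is flagged as a possible fault symptom. The signal frequencies and fault amplitude below are illustrative assumptions, not values from the survey.

```python
# Hedged sketch of signal-processing-based monitoring: compute the
# amplitude of one DFT bin and watch for a new harmonic appearing.

import cmath
import math

def bin_amplitude(x, k):
    """Amplitude of the component with k whole cycles over the window."""
    N = len(x)
    X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return 2 * abs(X) / N

N = 64
healthy = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
# fault symptom: an extra 0.8-amplitude harmonic at bin 15
faulty = [h + 0.8 * math.sin(2 * math.pi * 15 * n / N)
          for n, h in enumerate(healthy)]

print(round(bin_amplitude(healthy, 15), 2))  # -> 0.0
print(round(bin_amplitude(faulty, 15), 2))   # -> 0.8
```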
Article
Full-text available
One of the green hydrogen projects is the Zero Emission Hydrogen Turbine Center (ZEHTC), in which solar panels, a PEM electrolyzer, and a diaphragm compressor are used to generate power, produce hydrogen, and store hydrogen at high pressure, respectively. Faults in any component of photovoltaic (PV) systems, PEM electrolyzers, and diaphragm compressors can seriously affect the efficiency and energy yield, as well as the security and reliability, of the entire system if not detected and corrected quickly. In this paper, the types and causes of PV system, PEM electrolyzer, and diaphragm compressor failures are presented; then different methods proposed in the literature for fault detection and diagnosis (FDD) of these systems are reviewed and discussed. Special attention is paid to methods that can accurately detect, localize, and classify possible faults occurring in PV arrays. The advantages and limits of FDD methods in terms of feasibility, complexity, cost-effectiveness, and generalization capability for large-scale integration are highlighted. Based on the reviewed papers, challenges and recommendations for future research directions are also provided. In this work, different model-based approaches are investigated, as well as their validation and applications. An overview of the different methodologies available in the literature is proposed, oriented to help in developing a suitable diagnostic tool for PEM electrolyzer monitoring and fault detection and isolation (FDI). Model-based methods provide fault detection and identification, are easy to implement, and can be conducted during system operation.
... To improve the functioning of the system, fault diagnosis is performed to monitor, locate, and identify the faults within the system (Wakiru et al. 2019). The goal of early fault diagnosis is to gain sufficient time for countermeasures, which include the planning of maintenance actions such as repair, replacement, postponement of operations, etc. (Isermann 2005). ...
Article
Full-text available
Asset management of a complex technical system-of-systems needs cross-organizational operation and maintenance, asset data management, and context-aware analytics. Emerging technologies such as AI and digitalisation can facilitate the augmentation of asset management (AAM) by providing data-driven and model-driven approaches to analytics, i.e., now-casting and forecasting. However, implementing context-aware now-casting and forecasting analytics in an operational environment with varying contexts, such as fleets and distributed infrastructure, is challenging. The number of algorithms in such an implementation can be vast due to the large number of assets and operational contexts for the fleet. To reduce the complexity of the analytics, it is necessary to optimize the number of algorithms. This can be done by optimizing the number of operational contexts through a generalization and specialization approach based on both fleet behaviour and individual behaviour for improved analytics. This paper proposes a framework for context-aware now-casting and forecasting analytics for AAM based on a top-down (Fleet2Individual) and bottom-up (Individual2Fleet) approach. The proposed framework has been described and verified by applying it to the context of railway rolling stock in Sweden. The benefit of the proposed framework is to provide industries with a tool that can be used to simplify the implementation of AI and digital technologies in now-casting and forecasting.
... Actuator faults can deteriorate the performance of the system as a large control signal is always required to mitigate the impacts of faults. In general, electrical devices are prone to various faults caused by changes in environmental conditions such as humidity and temperature, faults caused by EMC problems, contact problems, and packaging failures [21]. In addition, semiconductors are vulnerable to different faults like metallization failures, and electrical overstress [22]. ...
Article
This paper presents a distributed fault-tolerant finite-time control algorithm for the secondary voltage and frequency restoration of islanded inverter-based Alternating Current (AC) Microgrids (MGs), considering input saturation and faults. Most existing distributed methods design the secondary control layer under ideal conditions of the MG's control input channels, without faults or disturbances. At the same time, MGs are exposed to actuator faults that can significantly impact their control and drive them into unstable situations. Saturation of some components is another typical practical constraint in multi-agent systems such as MGs. A further novelty is a consensus-based scheme that synchronizes the islanded MG's voltage and frequency to their nominal values for all DGs within finite time, irrespective of saturation and multiple simultaneous faults, including partial loss of effectiveness and stuck faults. Finally, the performance of the proposed control schemes is verified through offline digital time-domain simulation of a test MG system in several scenarios in the MATLAB/Simulink software environment. The effectiveness and accuracy of the proposed control schemes for islanded AC MGs are compared to previous studies, illustrating their advantages.
... Many solutions have appeared during the 1980s: parity space and observer-based approaches, eigenvalue assignment or parametric-based methods. See for example (Isermann 2006), (Chen and Patton 1999), (Zolghadri 2000). In the 1990s, a great number of publications dealt with specific aspects such as robustness and sensitivity, diagnosis-oriented modelling or robust isolation. ...
Article
What is the Achilles’ heel of academic Fault Detection and Isolation (FDI) methods when it comes to application to real aircraft systems? This paper discusses some major and decisive issues that stand in the way of their transition from lab developments and simulations to real life applications in aeronautics. Often underestimated by academics, these issues determine the survivability of a new design for final V&V (Verification & Validation) activities. The paper recalls some practical items that should be considered at the design stage to help reach high Technological Readiness Level (TRL) scales for a given FDI algorithm. The paper will also take a look in the future and the way forward to anticipate future needs.
... For survey papers on FTC, see [6,7,8]. Generally, FTC methods are classified into two types: active FTC (AFTC) and passive FTC (PFTC) [6]. ...
Article
In this paper, an active fault tolerant control system is proposed and applied to a new intensified heat exchanger/reactor system. This method consists of an adaptive observer-based fault detection, isolation, and identification scheme and a control law redesign method based on the backstepping approach. The objective of the application of the fault tolerant control system is to ensure the safety and productivity of this intensified heat exchanger/reactor even in the presence of a fault. Both parameter and sensor faults are considered. The effectiveness of the fault tolerant control method based on adaptive observers is validated by simulations on the heat exchanger/reactor system.
... The most notable are control methods: compression testing, air leaks from ICE cylinders, the compressor-vacuum method, the monitoring of thermal gaps using a probe, the endoscopic method, and the control of dynamic pressure in ICE combustion chambers, vibration, and noise [29][30][31][32][33][34][35][36][37]. However, the low reliability of these methods and their significant labor intensity require the development of new methods or the improvement of existing methods for diagnosing the VT. ...
Article
Full-text available
Diagnosing and detecting malfunctions in internal combustion engines (ICE) is not an easy task due to their complex design. Timely and high-quality ICE monitoring allows performance to be maintained and prevents breakdowns. Vibration and acoustic analysis is a powerful and informative tool for detecting faults even at an early stage. This article considers a method for determining the main malfunctions of the valvetrain (VT) (tightness of the "valve-seat" interface, thermal gap in the valve drive, valve opening and closing phases) by measuring and analyzing vibroacoustic pulses caused by the operation of individual engine elements. The maximum amplitude and the timing of vibration impulses are used as signal parameters. For the reference signal of the piston at the top dead center (TDC) of the cylinder under study, a vibration pulse from the impact of the piston on an elastic tip placed in the combustion chamber is taken. This technique makes it possible to exclude the external influences and inaccuracies associated with a change in the geometry of ICE elements.
... The STFT is defined as [22]: ...
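The snippet above cites the STFT definition without reproducing it; for completeness, the standard discrete-time short-time Fourier transform (not necessarily the exact form used in [22]) can be written as:

```latex
X(m,\omega) \;=\; \sum_{n=-\infty}^{\infty} x[n]\, w[n-m]\, e^{-j\omega n},
```

where $x[n]$ is the analyzed signal and $w[n]$ is a finite-length analysis window centered at sample $m$.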
Conference Paper
Reliable detection of broken rotor bars through the analysis of sidebands around the fundamental component of the stator current at steady state is difficult when the motor operates at no-load or low load. In this work it is proposed to monitor the time evolution of the high frequency sequence components associated with broken rotor bars during the starting transient. Through simulations, the behavior of these components is analyzed for balanced and unbalanced power supply conditions, with and without harmonic distortion. The strategy proved to be independent of these conditions.
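Broken rotor bars produce sidebands around the supply fundamental at frequencies $(1 \pm 2ks)f$, where $s$ is the slip. A minimal helper for computing them (function name and example values are illustrative, not from the paper):

```python
def rotor_bar_sidebands(f_supply, slip, k=1):
    """Characteristic sideband frequencies (1 +/- 2*k*s)*f produced by
    broken rotor bars around the supply fundamental f at slip s."""
    return (f_supply * (1 - 2 * k * slip),
            f_supply * (1 + 2 * k * slip))

# a 50 Hz motor at 4 % slip: sidebands near 46 Hz and 54 Hz
lower, upper = rotor_bar_sidebands(50.0, 0.04)
```

Note how this reflects the abstract's point: at no-load, slip tends to zero and both sidebands collapse into the fundamental, which is why steady-state sideband analysis fails there and the transient approach is proposed instead.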
... Functional SVMs [16] have been used to enhance the accuracy of fault feature classification. Different types of diagnosis methods, such as hardware-redundancy, knowledge-based, signal-processing, and model-based approaches [17][18][19], are well established. More recent expert techniques such as Swarm Intelligence [20], Neural Networks [21], and hybrid neuro-fuzzy approaches [22][23] are now popular for their ability to handle nonlinearity and their better approximation capability, but we observe that most fault diagnostic techniques have been developed and implemented for induction motors. ...
Preprint
Full-text available
A hybrid approach based on multirate signal processing and sensory data fusion is proposed for condition monitoring and the identification of fault signal signatures in the Flight ECS (Engine Control System) unit. Though motor current signature analysis (MCSA) is widely used for fault detection nowadays, the proposed hybrid method qualifies as one of the most powerful online/offline techniques for diagnosing process faults. Existing approaches have drawbacks that can degrade the performance and accuracy of a process-diagnosis system. In particular, it is very difficult to detect random stochastic noise due to the nonlinear behavior of the valve controller. Using only the Short Time Fourier Transform (STFT), frequency leakage and the small amplitude of the current components related to the fault can be observed, but the fault due to the controller behavior cannot. Therefore, a framework of advanced multirate signal and data processing aided by sensor fusion algorithms is proposed in this article, and satisfactory results are obtained. For implementation, a DSP-based BLDC motor controller with a three-phase inverter module (TMS 320F2812) is used, and the performance of the proposed method is validated on real-time data.
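The STFT stage described in the abstract can be sketched with a plain NumPy implementation (window length, hop size, and test tone are illustrative, not the values used in the paper):

```python
import numpy as np

def stft_mag(x, win_len=100, hop=50):
    """Magnitude STFT: Hann-windowed, half-overlapping frames."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

fs = 1000.0                            # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50.0 * t)       # e.g. a 50 Hz current component
S = stft_mag(x)                        # bin width = fs / win_len = 10 Hz
```

The dominant bin of the averaged spectrum sits at 50 Hz / 10 Hz = bin 5; the leakage the authors mention appears as energy smearing into neighbouring bins when the tone is short or not bin-aligned.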
... Therefore, a novel method based on agents is proposed to reduce the possibility of false alarms. As shown in Figure 4, the agents interact with each other, where the models and interconnections of B, F, and G enable the agents to analyze the estimation error of normal behavior [41,42]. Figure 4 shows the agent with model MG that performs alarm rejection based on the analysis of the outputs of the predecessor agents with models MB or MF. ...
Article
Full-text available
This study proposes a method for improving the capability of a data-driven multi-agent system (MAS) to perform condition monitoring and fault detection in industrial processes. To mitigate false fault-detection alarms, a co-operation strategy among software agents is proposed, because it performs better than the individual agents. A few steps transform this method into a valuable procedure for improving diagnostic certainty. First, a failure mode and effects analysis is performed to select physical monitoring signals of the industrial process that allow agents to collaborate via shared signals. Next, several artificial neural network (ANN) models are generated based on the normal operating conditions of various industrial subsystems equipped with monitoring sensors. Thereafter, the agents use the ANN-based expected-behavior models to prevent false alarms by continuously monitoring the measurement samples of physical signals that deviate from normal behavior. Finally, the method is applied to a wind turbine. The system and tests use actual data from a wind farm in Spain. The results show that the collaboration among agents facilitates the effective detection of faults and can significantly reduce false alarms, indicating a notable advancement in industrial maintenance and monitoring strategy.
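The co-operation idea — raise an alarm only when a deviation from the ANN normal-behaviour model persists and is confirmed on a shared signal by a second agent — can be sketched as follows (thresholds, persistence count, and names are illustrative assumptions, not from the paper):

```python
def confirmed_alarm(res_a, res_b, thr_a, thr_b, persist=3):
    """Co-operative alarm logic: agent A raises a fault alarm only if its
    normal-behaviour residual exceeds its threshold for `persist`
    consecutive samples AND agent B's residual on the shared signal
    confirms the deviation; isolated spikes are rejected as false alarms."""
    run = 0
    for ra, rb in zip(res_a, res_b):
        run = run + 1 if (abs(ra) > thr_a and abs(rb) > thr_b) else 0
        if run >= persist:
            return True
    return False
```

A single-sample spike on one agent, or a deviation not confirmed by the collaborating agent, therefore never triggers an alarm, which is the false-alarm-mitigation mechanism the abstract describes.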
... In the early stage of HVAC system fault diagnostics, the observable fault symptoms on various measurements were widely used by operators to judge a system's operation, as well as determine system operational abnormalities. This heuristic process generated qualitative measures such as some linguistic expressions like 'small', 'normal' and 'large' for empirical fault diagnostics (Isermann, 2005), and further evolved to some fault diagnostics approaches such as rule-based fault diagnostics (Schein and Bushby, 2006) and the expert system (Kaldorf and Gruber, 2002). Thanks to the rapid development of sensor techniques and achievements of data-driven techniques, an increasing volume of sensors has been deployed in the HVAC system, and more data can be easily collected for developing advanced data-driven fault diagnostics solutions. ...
Conference Paper
Full-text available
Fan coil units (FCUs) are decentralized air-conditioning devices that locally condition zone air. In the U.S. and Europe, FCUs are widely deployed in diverse types of buildings, such as offices, hotels, schools, and residential apartments, because of their low cost and easy installation. The abnormal operation of FCUs due to faults or malfunctioning components may cause significant energy waste and degraded thermal comfort. However, faults occurring in FCUs have seldom been investigated. A systematic analysis of FCU fault impacts would enable a better understanding of those impacts, efficient development of fault diagnostics approaches, and improved FCU monitoring system design. In this paper, we used an FCU simulation model, developed in the HVACSIM+ environment in a previous study, to evaluate FCU fault impacts. Five common faults with different intensities were simulated within a one-year time window to generate fault-inclusive operation data. We employed a bottom-up fault impact analysis framework. Fault effects on multiple measurements were first evaluated to obtain fault symptom occurrence probability distributions, which quantify the measurements' sensitivities. Second, fault impacts on thermal comfort and fan power energy consumption were assessed. Lastly, the resulting thermal comfort impacts and energy penalties were used to rank FCU faults.
... The LDI process is the task of determining whether the WDN is operating under a leak (i.e., leak detection) and finding its location once it has been detected (i.e., leak isolation) [24,25]. However, for branched pipeline WDNs, the LDI problem can be challenging in complex and large-scale case studies. ...
Article
Full-text available
The main contribution of this paper is to present a novel solution for the leak diagnosis problem in branched pipeline systems considering the availability of pressure head and flow rate sensors on the upstream (unobstructed) side and the downstream (constricted) side. This approach is based on a bank of Kalman filters as state observers designed on the basis of the classical water hammer equations and a related genetic algorithm (GA), which includes a fitness function based on an integral error that helps obtain a good estimation despite the presence of noise. For solving the leak diagnosis problem, three stages are considered: (a) the leak detection is performed through a mass balance; (b) the region where the leak is occurring is identified by implementing a reduced bank of Kalman filters, which localize the leak by sweeping all regions of the branching pipeline through a GA that reduces the computational effort; (c) the leak position is computed through an algebraic equation derived from the water hammer equations in steady state. To assess this methodology, experimental results are presented using a test bed built at the Tuxtla Gutiérrez Institute of Technology, Tecnológico Nacional de México (TecNM). The obtained results are then compared with those from a classic extended Kalman filter (EKF), which is widely used in solving leak diagnosis problems; the GA approach outperforms the EKF in two cases, whereas the EKF is better in one case.
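Stages (a) and (b) above can be caricatured in a few lines; a real implementation runs one Kalman filter per candidate region, but the decision logic reduces to the following sketch (function names, threshold, and residual values are illustrative):

```python
def detect_leak(q_in, q_out, thr):
    """Stage (a): a steady-state mass-balance residual above a
    noise-dependent threshold indicates a leak somewhere in the network."""
    return abs(q_in - q_out) > thr

def isolate_region(residual_norms):
    """Stage (b): each filter in the bank assumes the leak is in one
    region; the filter whose residual norm is smallest (best model/data
    match) points to the leaking region."""
    return min(residual_norms, key=residual_norms.get)

region = isolate_region({"R1": 0.83, "R2": 0.12, "R3": 0.47})  # -> "R2"
```

In the paper, the GA replaces the exhaustive sweep over regions implied by `isolate_region`, which is where the computational savings come from.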
Chapter
In this work, we make use of the Model-of-Signal technique to perform lubrication monitoring of a large industrial worm gear motor. We assume sensor measurements to be modelled by autoregressive processes and exploit the edge-computing capabilities of programmable logic controllers to run the Recursive Least Squares algorithm that identifies them. Then, we use those models to compute indicators able to diagnose the lubricant level within the gearbox and compare them to the statistical indexes traditionally used for monitoring. The aim of this application is to show how to build a condition monitoring infrastructure in an industrial environment able to detect possible occurring faults locally and acquire knowledge about them by exchanging information with external computers, paving the way towards Intelligent Maintenance Systems in Industry 4.0. Keywords: Condition monitoring, Diagnosis, Automatic machines, Industry 4.0
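The Model-of-Signal step — fitting an autoregressive model to sensor data with Recursive Least Squares — can be sketched as follows (the forgetting factor, initialisation, and test data are illustrative assumptions, not the chapter's values):

```python
import numpy as np

def rls_ar(y, order=2, lam=1.0, delta=1e4):
    """Identify AR coefficients a1..an of y[k] = a1*y[k-1] + ... +
    an*y[k-n] + e[k] with exponentially weighted Recursive Least Squares."""
    theta = np.zeros(order)          # parameter estimates
    P = delta * np.eye(order)        # inverse-correlation matrix
    for k in range(order, len(y)):
        phi = y[k - order:k][::-1]   # regressor of past samples
        err = y[k] - phi @ theta     # a-priori prediction error
        g = P @ phi / (lam + phi @ P @ phi)
        theta = theta + g * err
        P = (P - np.outer(g, phi @ P)) / lam
    return theta

# noise-free AR(2) data with true coefficients (1.5, -0.7)
y = [1.0, 1.0]
for _ in range(200):
    y.append(1.5 * y[-1] - 0.7 * y[-2])
theta = rls_ar(np.array(y))
```

With `lam` slightly below 1 the estimate tracks slow changes in the signal model, which is what makes the identified coefficients usable as lubrication-condition indicators.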
Chapter
The orbital uncertainty propagation problem is treated, where the uncertainty concerning an object's position and velocity is propagated over a long time interval because observations are scarce. This paper focuses on the Gaussian mixture description of the uncertainty and proposes a method that adaptively changes the number of mixture components to represent the uncertainty efficiently. The proposed method uses a mean-square-error-based measure of nonlinearity to decide whether mixture components should be split in order to preserve the fidelity of the uncertainty description. Further, the paper analyzes the performance of four local propagators, which propagate individual mixture components in time. The performance analysis is accomplished using a low-earth-orbit scenario. Keywords: Gaussian mixture, Measures of nonlinearity, Uncertainty propagation, Space surveillance, Orbital mechanics
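The core splitting operation — replacing one Gaussian component by two while preserving the mixture's first two moments — can be illustrated in one dimension (the split spacing `delta` is a tuning choice for illustration, not the paper's scheme):

```python
from math import sqrt

def split_gaussian(w, mu, var, delta=0.5):
    """Moment-preserving 1-D split: one component (w, mu, var) becomes two
    equal-weight components at mu +/- d with reduced variance, so the
    mixture mean and variance are unchanged (requires delta < 1)."""
    d = delta * sqrt(var)
    child_var = var - d * d
    return [(w / 2, mu - d, child_var),
            (w / 2, mu + d, child_var)]

def mixture_moments(components):
    """Mean and variance of a 1-D Gaussian mixture [(w, mu, var), ...]."""
    mean = sum(w * m for w, m, _ in components)
    var = sum(w * (v + m * m) for w, m, v in components) - mean * mean
    return mean, var

children = split_gaussian(1.0, 2.0, 4.0)
```

The narrower children approximate the nonlinearly propagated density better than the single parent, which is why splitting is triggered when the measure of nonlinearity grows.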
Chapter
A novel approach for solving the linear quadratic regulator (LQR) state-derivative control problem is proposed in this paper. To solve the LQR state-derivative problem and design the controllers, linear matrix inequalities (LMIs) are used, owing to their relatively easy manipulation. In the formulation of the problem, an uncertain system is considered, described as a convex combination of polytope vertices. A set of weighting matrices is associated with each polytope vertex, which makes it possible to improve the behaviour of the uncertain system and/or the control signal, as shown in simulation and practical implementation. By properly choosing this set of weighting matrices, one may prioritize (or not) performance during the fault or in its absence. The proposed theorem is validated through a practical implementation on an active suspension system subject to failure. Keywords: Linear quadratic regulator (LQR), Linear matrix inequalities (LMIs), State-derivative feedback, Robust control, Fault-tolerant control
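For reference, the classical state-feedback LQR baseline (not the LMI-based state-derivative variant the paper develops) follows from the continuous algebraic Riccati equation; a minimal sketch on an illustrative double-integrator plant, not the suspension model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# double-integrator plant (illustrative, not the active suspension system)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                       # state weighting
R = np.array([[1.0]])               # control weighting

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
closed_loop = A - B @ K
```

The paper's LMI formulation generalizes exactly this weighting-matrix choice: one (Q, R) pair per polytope vertex, traded off between faulty and fault-free operation.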
Article
The natural variation of the data signatures of airborne aerosols from calibrated cigarette particles was quantified using enhanced Bonferroni methods. The significance of the problem of improving analytical methods for understanding the natural variation of airborne particles cannot be overstated, given the positive impact of mitigating harmful airborne particles. The data presented in this paper were obtained from experiments examining the effect of carbon-brush-based bipolar ionization on the filtration efficiency of a MERV 10 filter in a recirculating HVAC system. Ionization technology is deployed throughout the world as a multilayered approach, together with filtration, for improving indoor air quality. Despite its wide use, ionization is still considered an emerging technology due to a dearth of peer-reviewed literature; poorly designed test protocols and a lack of robust statistical methods for analyzing experimental data are the primary reasons. Presented herein is a statistical groundwork for analyzing ionization-efficacy data from highly controlled and properly designed particulate-matter test trials. Results are presented for three experimental groups in which bipolar ionization was used to study the behavior of data signatures from cigarette-smoke aerosol particles ranging in size from 49.6 to 201.7 nm. Statistical control bands of the data from these experimental groups revealed that bipolar ionization significantly changed the pdfs and reduced the natural variation of the data signatures for the particle count (number of particles) across all particle sizes. Statistical control bands may provide enhanced quantitative knowledge of variation and expanded inference that goes beyond the examination of percentiles only.
The implications from this research are profound, as it lays the groundwork for the development of highly effective ionization-filtration layered strategies to mitigate the hazards of airborne particulates and is the first step towards creating robust efficacy test standards for the industry.
Chapter
Crew resource management (CRM) is the product of a paradigm shift in safety thinking from 'finding the problem' to 'finding the solution'. Until the 'crash of the century' took place in Tenerife in 1977, the first officer was only to be seen and not heard: 'a good-for-nothing' sandbag sitting in the right seat. But all changed in 1977-1978 with the introduction of CRM, initially cockpit resource management and now crew resource management. Created out of necessity by aviation, the system was so elaborate that every kind of high-reliability organization (HRO) took it up as the most efficient and effective method to reduce human fallibility. From the civil nuclear sector to medical science to firefighting, all have adhered to CRM principles. The latest innovation from the experts is threat and error management (TEM), which is coincidentally the sixth revision of CRM. The aim of the present effort is to relate how CRM has come of age, the purpose behind it, and to take a deeper view of its successful cross-functioning into various vocations and industries.
Conference Paper
The paper considers increasing the economic and operational indicators of gas-cylinder vehicles by improving diagnostic technology. A model of the process of diagnosing the gas-fuel system of automobiles is proposed, based on optimizing the matrix of possible states of the object by calculating the decrease in entropy. The problem of self-optimization of the developed model is solved by automatic recalculation of the prior probabilities of the occurrence of states. The effectiveness of the proposed diagnostic technique is shown in a production experiment: the median labor intensity of troubleshooting the gas-fuel system decreased from 33 to 21 minutes. The results can be used by specialists in the technical operation of road transport to build an applied system for the technical diagnostics of CNG/LPG vehicles.
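The entropy-decrease criterion — choose the next diagnostic check that maximises the expected reduction of entropy over the matrix of possible states — can be sketched as follows (the data layout and example checks are illustrative, not the paper's):

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a discrete state distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

def best_check(prior, checks):
    """Pick the check with the largest expected entropy decrease.
    `checks` maps a check name to its outcomes, each given as an
    (outcome probability, posterior state distribution) pair."""
    h0 = entropy(prior)
    gain = {name: h0 - sum(p * entropy(post) for p, post in outcomes)
            for name, outcomes in checks.items()}
    return max(gain, key=gain.get)

prior = [0.5, 0.5]                        # two equally likely fault states
checks = {
    "separating":    [(0.5, [1.0, 0.0]), (0.5, [0.0, 1.0])],  # 1 bit gained
    "uninformative": [(1.0, [0.5, 0.5])],                     # 0 bits gained
}
```

Repeating this selection after each observed outcome yields the shortest expected troubleshooting sequence, which is the mechanism behind the reported reduction in median labor intensity.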
Chapter
This chapter introduces Kalman filter-based fault diagnosis methods for discrete-time linear stochastic systems. This chapter first considers the Kalman filter-based fault detection, including the residual generation based on the Kalman filter and the residual evaluation method based on statistic test. Second, this chapter generalizes the methodology of DOS to Kalman filter-based fault isolation. Finally, a fault estimation method based on augmented state Kalman filter is presented in this chapter.
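Residual generation and statistical residual evaluation, as described in this chapter, can be sketched for a scalar system (noise levels and the 5 % chi-square threshold are illustrative choices, not the chapter's):

```python
def kalman_fault_alarms(z_seq, a=1.0, h=1.0, q=1e-4, r=1e-2,
                        x0=0.0, p0=1.0, chi2_thr=3.84):
    """Scalar Kalman filter: the innovation e = z - h*x_pred is the
    residual; e^2 / S is compared against the chi-square threshold
    (1 dof, 5 % level) to decide whether a fault is present."""
    x, p, alarms = x0, p0, []
    for z in z_seq:
        x_pred, p_pred = a * x, a * p * a + q
        s = h * p_pred * h + r                # innovation covariance
        e = z - h * x_pred                    # residual generation
        alarms.append(e * e / s > chi2_thr)   # residual evaluation
        k = p_pred * h / s
        x, p = x_pred + k * e, (1.0 - k * h) * p_pred
    return alarms

# fault-free data, then an abrupt sensor bias of 1.0
alarms = kalman_fault_alarms([0.0] * 20 + [1.0] * 5)
```

In the fault-free regime the normalized innovation stays chi-square distributed, so the threshold fixes the false-alarm rate; the bias at sample 21 drives the test statistic far above it.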
Chapter
This chapter presents two residual evaluation methods for observer-based fault detection. For LTI systems with norm-bounded uncertainties, a residual evaluation method based on peak-to-peak analysis is presented. For systems with interval-bounded uncertainties, a residual evaluation method based on interval analysis is proposed. Both methods yield adaptive thresholds for residual evaluation.
Chapter
General principles for safety and security apply to the whole system; i.e., they cover many quality properties of the CPSs. Therefore, they are presented in this chapter.
Article
Full-text available
In this paper, a new approach to fault detection and isolation for multiple simultaneous faults in a quad-rotor system is proposed. The method is based on the parity space, and high-accuracy diagnosis of multiple faults can be achieved with it. Another advantage of this scheme is that, in addition to its fault localization capability, the types of faults can be identified as well. In this research, only multiplicative actuator faults are considered, and step, impulse, and sinusoidal faults are studied. To achieve high accuracy, 10 residuals are generated, and multiple combinations of these residuals are studied for diagnosis purposes. As the parity space approach applies only to linear systems, the linear dynamics of the quad-rotor in hovering mode are selected for simulation. Simulation results show the effectiveness and high accuracy of the proposed method in fault detection, diagnosis, and type identification of actuator faults.
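The core parity-space idea — project the measurements onto the left null space of the measurement matrix, so the residual is decoupled from the state but sensitive to additive faults — can be sketched for hardware-redundant sensors (the temporal version used for the quad-rotor stacks C, CA, ... analogously; matrices and fault values here are illustrative):

```python
import numpy as np

def parity_matrix(C):
    """Rows spanning the left null space of C (so V @ C = 0): the
    residual r = V @ y then depends only on faults and noise."""
    u, s, _ = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-10))
    return u[:, rank:].T

# three redundant sensors measuring one scalar quantity
C = np.array([[1.0], [1.0], [1.0]])
V = parity_matrix(C)
y_ok = C[:, 0] * 2.0                          # fault-free measurements
y_fault = y_ok + np.array([0.0, 0.5, 0.0])    # additive fault on sensor 2
r_ok, r_fault = V @ y_ok, V @ y_fault
```

Which components of the residual respond to the fault encodes its location, which is why combinations of several such residuals allow both isolation and type identification.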
Article
Full-text available
The continual expansion of the range of applications for unmanned aerial vehicles (UAVs) is resulting in the development of more and more sophisticated systems. The greater the complexity of the UAV, the greater the likelihood that a component will fail. Because drones often operate in close proximity to humans, the reliability of flying robots, which directly affects the level of safety, is becoming more important. This review article presents recent research on fault detection in unmanned flying systems, covering papers published between January 2016 and August 2022. The Web of Science and Google Scholar databases were used to search for articles, with terminology related to fault detection of unmanned aerial vehicles as keywords. Each paper was analyzed and briefly summarized, and the most important details were collected in a table.
Chapter
This chapter introduces several observer-based fault estimation methods, including adaptive observer-based methods and augmented state observer-based methods. First, two adaptive observer-based fault estimation methods are given for continuous-time LTI systems with actuator faults. Second, fault estimation methods based on an augmented state observer are studied for both continuous-time and discrete-time systems. To improve the fault estimation performance, robust design of the augmented observer-based fault estimator is also studied.
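The augmented-state idea — append the fault to the state vector, model it as (near-)constant, and let a standard observer estimate state and fault jointly — in a minimal scalar discrete-time sketch (all matrices, noise levels, and the test scenario are illustrative, not the chapter's):

```python
import numpy as np

def augmented_fault_estimate(z_seq, q_x=1e-4, q_f=1e-3, r=1e-2):
    """Augmented-state Kalman estimator: state x and additive fault f
    are estimated jointly; f is modelled as a random walk."""
    A = np.array([[1.0, 1.0],   # x[k+1] = x[k] + f[k]  (fault enters dynamics)
                  [0.0, 1.0]])  # f[k+1] = f[k]         (near-constant fault)
    H = np.array([[1.0, 0.0]])  # only x is measured
    Q = np.diag([q_x, q_f])
    xa, P = np.zeros(2), np.eye(2)
    for z in z_seq:
        xa, P = A @ xa, A @ P @ A.T + Q
        S = (H @ P @ H.T).item() + r
        K = (P @ H.T / S).ravel()
        xa = xa + K * (z - (H @ xa).item())
        P = (np.eye(2) - np.outer(K, H)) @ P
    return xa[1]                # fault estimate

# measurements of a ramp caused by a constant fault f = 0.2
f_hat = augmented_fault_estimate([0.2 * k for k in range(1, 51)])
```

The small random-walk variance `q_f` is the knob trading convergence speed against noise sensitivity; the robust designs in the chapter address exactly this trade-off.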
Preprint
Over the last decade, transfer learning has attracted a great deal of attention as a new learning paradigm, based on which fault diagnosis (FD) approaches have been intensively developed to improve the safety and reliability of modern automation systems. Because of inevitable factors such as varying work environments, performance degradation of components, and heterogeneity among similar automation systems, FD methods with long-term applicability have become attractive. Motivated by these facts, transfer learning has been an indispensable tool that endows FD methods with self-learning and adaptive abilities. After presenting basic knowledge in this field, this survey carries out a comprehensive review of transfer-learning-motivated FD methods, whose two subclasses are developed based on knowledge calibration and knowledge compromise. Finally, some open problems, potential research directions, and conclusions are highlighted. Unlike existing reviews of transfer learning, this survey focuses on how to utilize knowledge specifically for FD tasks, on the basis of which three principles and a new classification strategy for transfer-learning-motivated FD techniques are also presented. We hope this work will constitute a timely contribution to transfer-learning-motivated techniques on the FD topic.
Article
The frequency of cyberattacks against process control systems has increased in recent years. This work considers multiplicative false‐data injection attacks involving the multiplication of the data communicated over the sensor‐controller communication link by a factor. An active detection method utilizing switching between two control modes is developed to balance the trade‐off between closed‐loop performance and attack detectability. Under the first mode, the control parameters are selected using traditional control design criteria. Under the second mode, the control parameters are selected to enhance the attack detection capability. A switching condition is imposed to prevent false alarms that could be triggered by the transient response induced by control mode switching. This condition is incorporated into the active detection method to minimize false alarms. The active detection method is applied to illustrative process examples to demonstrate its ability to detect attacks and minimize false alarms.
Conference Paper
This work considers an important problem in the field of industrial engineering, i.e., the problem of fault detection and process diagnosis, sometimes referred to as the CDD problem. The main idea is to use models based on information and frequency representations. It is assumed that a fault in the considered process generates mechanical vibrations, which are measured and processed by advanced signal processing techniques, e.g., statistical processing and various entropies or divergences. The data set is processed in observation windows of variable length, depending on the stationarity properties of the vibrations. Process diagnosis is implemented with the help of the pattern classification paradigm, each pattern being described by specific features in line with the used models and parameters. Two types of signals are considered for the tests: real physical signals and artificially generated signals. The computer-based experiments show promising results for the method, as well as the limits of various signal processing methods and information measures. The method, based on data processing and feature extraction in the frequency domain, could also be applied to other data types and processes where frequency-domain information is relevant.
Conference Paper
Supervision of mechanical ventilation is currently still performed by clinical staff. With the increasing level of automation in the intensive care unit, automatic supervision is becoming necessary. We present a fuzzy-based expert supervision system applicable to automatic feedback control of oxygenation. An adaptive fuzzy limit checking and trend detection algorithm was implemented. A knowledge-based fuzzy logic system combines these outputs into a final score, which subsequently triggers alarms if a critical event is registered. The system was evaluated against annotated experimental data. An accuracy of 83 percent and a precision of 95 percent were achieved. The automatic detection of critical events during feedback control of oxygenation provides an additional layer of safety and assists in alerting clinicians in the case of abnormal behavior of the system. Clinical relevance - Automatic supervision is a necessary feature of physiological feedback systems to make them safer and more reliable in the future.
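The fuzzy limit-checking step described above can be illustrated with a trapezoidal membership function (the SpO2 limits and the 0.5 alarm score are invented for illustration, not taken from the paper):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def low_oxygenation_degree(spo2):
    # degree to which SpO2 (%) is "critically low": fully low below 85,
    # definitely acceptable above 92 (illustrative limits)
    return trapezoid(spo2, -1.0, 0.0, 85.0, 92.0)

def alarm(spo2, score_threshold=0.5):
    """Trigger an alarm when the fuzzy criticality score crosses 0.5."""
    return low_oxygenation_degree(spo2) >= score_threshold
```

Unlike a crisp limit check, the graded membership lets a knowledge-based rule base combine several such scores (and trend information) into one final criticality score before alarming.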
Article
Fault Detection and Diagnosis (FDD) is a Process Systems Engineering (PSE) area of great importance, especially with increased process automation. It is one of the chemical engineering fields considered promising for Artificial Intelligence (AI) applications. FDD systems can be useful to supervise the behavior of Sour Water Treatment Units (SWTU), chemical processes that present operational difficulties when disturbances occur. SWTUs remove contaminants from sour water (SW) streams generated through petroleum processing, consisting mainly of small amounts of H2S and NH3. They are considered one of the primary aqueous wastes of refineries and cannot be disposed of due to environmental regulations. However, no previous studies on the development of FDD systems for SWTUs exist, and works on their dynamics are scarce. Hence, the present work studies the simulated dynamic behavior of an SWTU and develops an FDD system applying AI techniques with hyperparameter optimization. The simulation was performed in Aspen Plus Dynamics® and run to create normal operation data and six relevant faults, including occurrences in the process (e.g., inundation and fouling) and in sensors. FDD was performed through data classification, and results were evaluated mainly by accuracy and confusion matrices. Even after variable reduction, FDD was satisfactory, with over 87.50% accuracy for all AI techniques. Random forests (RF) and SVM with linear and Gaussian kernels presented the best results, with over 93% accuracy in training and testing, and had the shortest computing times. The second column's sump level proved to be the most relevant variable for fault identification.
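The evaluation step — accuracy and confusion matrices over the classes (normal operation plus the faults) — reduces to a few lines; a minimal stdlib sketch with invented class labels for illustration:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows: true class, columns: predicted class."""
    idx = {c: i for i, c in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy(y_true, y_pred):
    """Fraction of samples classified into the correct fault class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["normal", "fouling", "normal", "sensor"]
y_pred = ["normal", "fouling", "fouling", "sensor"]
cm = confusion_matrix(y_true, y_pred, ["normal", "fouling", "sensor"])
```

The off-diagonal entries of `cm` show which faults are confused with each other, which is more informative for FDD than the single accuracy number.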