Chapter

Identification of Lack of Knowledge Using Analytical Redundancy Applied to Structural Dynamic Systems


Abstract

Reliability of sensor information in today's highly automated systems is crucial. Neglected and non-quantifiable uncertainties lead to a lack of knowledge, which results in erroneous interpretation of sensor data. Physical redundancy is an often-used approach to reduce the impact of lack of knowledge, but it is infeasible in many cases and gives no absolute certainty about which sensors and models to trust. However, structural models can link spatially distributed sensors to create analytical redundancy. By using existing sensor data and models, analytical redundancy comes with the benefits of unchanged structural behavior and cost efficiency. The detection of conflicting data using analytical redundancy reveals lack of knowledge, e.g. in sensors or models, and supports the inference from conflict to cause. We present an approach to enforce analytical redundancy by using an information model of the technical system that formalizes sensors, physical models and the corresponding uncertainty in a unified framework. This allows for the continuous validation of models and the verification of sensor data. The approach is applied to a structural dynamic system with various sensors based on an aircraft landing gear system.
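As a rough illustration of the core idea (not code from the chapter), the following Python sketch links a direct force measurement with a model-based force estimate derived from an acceleration sensor; the class names, sensor values and uncertainty figures are assumptions made for this example. Two redundant estimates of the same quantity are treated as consistent when their confidence intervals overlap; otherwise a data-induced conflict points to lack of knowledge in a sensor or a model.

```python
# Minimal sketch of an information model linking sensors and a physical model
# with quantified uncertainty (illustrative only; names and numbers assumed).
from dataclasses import dataclass

@dataclass
class Estimate:
    """A value of a physical quantity with a symmetric confidence interval."""
    quantity: str      # e.g. "force"
    value: float       # estimated value
    half_width: float  # half-width of the confidence interval

    def interval(self):
        return (self.value - self.half_width, self.value + self.half_width)

def consistent(a: Estimate, b: Estimate) -> bool:
    """Two redundant estimates agree if their confidence intervals overlap."""
    lo_a, hi_a = a.interval()
    lo_b, hi_b = b.interval()
    return max(lo_a, lo_b) <= min(hi_a, hi_b)

# Analytical redundancy: a structural model (here simply F = m * a) turns an
# acceleration measurement into a second, model-based estimate of the force.
mass = 120.0                                        # kg, assumed known +/- 2 %
force_sensor = Estimate("force", 950.0, 40.0)       # N, direct measurement
accel_sensor = Estimate("acceleration", 8.5, 0.2)   # m/s^2

model_force = Estimate(
    "force",
    mass * accel_sensor.value,
    mass * accel_sensor.half_width + 0.02 * mass * accel_sensor.value,
)

if consistent(force_sensor, model_force):
    print("consensus: sensor and model confirm each other")
else:
    print("data-induced conflict: lack of knowledge in sensor or model")
```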


... Even if the uncertainty of all sources is quantified and taken into account, contradictory measurements in which the confidence intervals of two or more data sources do not overlap lead to so-called data-induced conflicts, which cannot be resolved with classical fusion techniques (Hartig et al., 2020). In the past, unresolved and ignored data-induced conflicts led to several severe incidents such as the crashes of the ExoMars probe Schiaparelli on 19 October 2016 and the Boeing 737 Max on 29 October 2018. ...
... Redundancy increases the availability of information but can also lead to contradictory measurements (Khaleghi et al., 2013). These so-called data-induced conflicts are the result of redundant data sources with non-overlapping confidence intervals (Hartig et al., 2020). These conflicts can be used to enhance information about the system, but remain unnoticed if uncertainty is not sufficiently taken into account or if too few or overly uncertain data sources are considered. ...
... These conflicts can be used to enhance information about the system, but remain unnoticed if uncertainty is not sufficiently taken into account or if too few or overly uncertain data sources are considered. Figure 1 illustrates three redundant data sources with their respective uncertainty and a data-induced conflict between source A and the consensus of sources B and C. In this work we define consensus as a confirmatory statement from redundant data sources with overlapping confidence intervals (Hartig et al., 2020). ...
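A minimal sketch of the situation described for Figure 1, with assumed numbers: three redundant data sources are each represented by a value and the half-width of its confidence interval, and pairwise interval overlap decides between consensus and conflict, so that source A ends up in conflict with the consensus of B and C.

```python
# Illustrative only: values and half-widths are assumptions, not measured data.
sources = {
    "A": (10.0, 0.3),   # (value, half-width of the confidence interval)
    "B": (11.2, 0.4),
    "C": (11.5, 0.5),
}

def overlaps(p, q):
    (vp, hp), (vq, hq) = p, q
    return max(vp - hp, vq - hq) <= min(vp + hp, vq + hq)

names = list(sources)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        status = "consensus" if overlaps(sources[a], sources[b]) else "conflict"
        print(f"{a} vs {b}: {status}")   # A conflicts with B and C; B and C agree
```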
Article
Decision-making relies heavily on the accuracy and veracity of data. Redundant data acquisition and fusion have therefore become established, but lack the ability to handle conflicting data correctly. Digital twins in particular, which complement physical products with mathematical models, contribute to this redundancy. Uncertainty propagates through the digital twin and provides the opportunity to check data for conflicts, to identify affected subsystems and to infer a possible cause. This work presents an approach that combines a digital twin capable of uncertainty propagation with conflict detection, processing and visualisation techniques for mastering data-induced conflicts. The capability of this method to identify and isolate faults was examined on a technical system with a multitude of sensors.
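One way to realise the uncertainty propagation mentioned above is classical first-order (linear) error propagation. The sketch below uses finite-difference gradients and an assumed single-mass oscillator model; it is meant only to illustrate the mechanism, not to reproduce the implementation described in the article.

```python
# First-order (linear) uncertainty propagation through a model y = f(x):
# sigma_y^2 ~ sum_i (df/dx_i)^2 * sigma_i^2.  Illustrative sketch only.
import numpy as np

def propagate(f, x, sigma, eps=1e-6):
    """Propagate input standard deviations `sigma` through `f` using
    central finite-difference gradients (first-order approximation)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        h = eps * max(1.0, abs(x[i]))      # relative step for scale robustness
        dx = np.zeros_like(x)
        dx[i] = h
        grad[i] = (f(x + dx) - f(x - dx)) / (2 * h)
    sigma_y = float(np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2)))
    return f(x), sigma_y

# Example: natural frequency of a single-mass oscillator, f0 = sqrt(k/m) / (2*pi)
model = lambda p: np.sqrt(p[0] / p[1]) / (2 * np.pi)   # p = [stiffness k, mass m]
f0, sigma_f0 = propagate(model, x=[5.0e5, 12.0], sigma=[2.5e4, 0.3])
print(f"f0 = {f0:.2f} Hz +/- {2 * sigma_f0:.2f} Hz (approx. 95 % interval)")
```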
... This section introduces the concept of data-induced conflicts, discusses the advantages and challenges, and presents a method for dealing with data-induced conflicts in technical systems. The method is a slightly extended version of [70]. ...
Chapter
Full-text available
This chapter describes the various approaches to analyse, quantify and evaluate uncertainty along the phases of the product life cycle. It is based on the previous chapters that introduce a consistent classification of uncertainty and a holistic approach to master the uncertainty of technical systems in mechanical engineering. Here, the following topics are presented: the identification of uncertainty by modelling technical processes, the detection and handling of data-induced conflicts, the analysis, quantification and evaluation of model uncertainty as well as the representation and visualisation of uncertainty. The different approaches are discussed and demonstrated on exemplary technical systems.
Article
Full-text available
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply “autodiff”, is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other’s results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names “dynamic computational graphs” and “differentiable programming”. We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms “autodiff”, “automatic differentiation”, and “symbolic differentiation” as these are encountered more and more in machine learning settings.
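For readers encountering the term for the first time, a minimal forward-mode AD implementation based on dual numbers (a standard textbook construction, not code from the survey) can look like this:

```python
# Forward-mode automatic differentiation with dual numbers: each value carries
# its derivative, and arithmetic propagates both exactly (no truncation error).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot                    # value, derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the derivative part with 1."""
    return f(Dual(x, 1.0)).dot

# d/dx (3*x*x + 2*x) at x = 4  ->  6*4 + 2 = 26
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))
```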
Article
Full-text available
The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.
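As a small illustration of the state-estimation category, two independent Gaussian measurements of the same quantity can be fused by inverse-variance weighting (the static special case of Kalman/Bayesian fusion); the numbers are assumed for the example.

```python
# Fuse two independent Gaussian estimates (mean, variance) of the same quantity.
def fuse(m1, var1, m2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2          # inverse-variance weights
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)                    # fused variance is always smaller
    return mean, var

mean, var = fuse(10.2, 0.04, 9.8, 0.09)
print(f"fused estimate: {mean:.3f} +/- {var ** 0.5:.3f} (1 sigma)")
```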
Conference Paper
Full-text available
This paper presents a sensor fusion strategy based on a Bayesian method that can identify inconsistency in sensor data so that spurious data can be eliminated from the fusion process. The proposed method adds a term to the commonly used Bayesian technique that represents the probabilistic estimate of the event that the data is not spurious, conditioned upon the data and the true state. This term has the effect of increasing the variance of the posterior distribution when data from one of the sensors is inconsistent with respect to the other. The proposed strategy was verified with the help of extensive simulations. The simulations showed that the proposed method was able to identify inconsistency in sensor data and confirmed that this identification led to a better estimate of the desired state variable.
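The following numerical sketch imitates the described effect under simplifying assumptions of our own rather than reproducing the authors' formulation: each measurement is modelled as a mixture of a valid (narrow) and a spurious (broad) Gaussian, and the grid-based posterior variance grows when the two measurements are inconsistent.

```python
import numpy as np

def posterior(measurements, sigma=0.5, p_valid=0.9, spurious_sigma=10.0):
    """Grid-based posterior over a scalar state given measurements that are
    each either valid (narrow Gaussian) or spurious (broad Gaussian)."""
    x = np.linspace(-20.0, 20.0, 4001)
    log_post = np.zeros_like(x)                        # flat prior on the grid
    for z in measurements:
        valid = p_valid * np.exp(-0.5 * ((z - x) / sigma) ** 2) / sigma
        spurious = ((1 - p_valid)
                    * np.exp(-0.5 * ((z - x) / spurious_sigma) ** 2) / spurious_sigma)
        log_post += np.log(valid + spurious)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    mean = float((x * post).sum())
    var = float(((x - mean) ** 2 * post).sum())
    return mean, var

print(posterior([2.0, 2.1]))   # consistent data   -> small posterior variance
print(posterior([2.0, 8.0]))   # inconsistent data -> inflated posterior variance
```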
Book
Modern mineral processing plants are required to be safe and profitable and to minimize their environmental impact. The consequent quest for higher operational standards at reduced cost is leading the industry towards automation technologies as capital-effective means of attaining these objectives. Advanced Control and Supervision of Mineral Processing Plants describes the use of dynamic models of major items of mineral processing equipment in the design of control, data reconciliation and soft-sensing schemes; through examples, it illustrates tools integrating simulation and control system design for comminuting circuits and flotation columns. Full coverage is given to the design of soft sensors based on either single-point measurements or more complex measurements like images. The chief issues concerning steady-state and dynamic data reconciliation and their employment in the creation of instrument architecture and fault diagnosis are surveyed. In consideration of the widespread use of distributed control and information management systems in mineral processing, the book describes the current platforms and toolkits available for implementing such advanced systems. Applications of the techniques described in real mineral processing plants are used to highlight their benefits; information for all of the examples, together with supporting MATLAB® code can be found at www.springer.com/978-1-84996-105-9. The provision of valuable tools and information on the use of modern software platforms and methods will benefit engineers working in the mineral processing industries, and control engineers and academics interested in the real industrial practicalities of new control ideas. The book will also be of interest to graduate students in chemical, metallurgical and electronic engineering looking for applications of control technology in the treatment of raw materials.
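As a compact illustration of the steady-state data reconciliation covered by the book, the sketch below adjusts three measured flows of an assumed splitter so that the mass balance holds exactly, using the classical variance-weighted least-squares closed form; the stream names, values and variances are invented for the example.

```python
# Steady-state data reconciliation: adjust raw measurements x_m so that the
# linear balance A @ x = 0 holds, minimising the variance-weighted adjustment.
# Closed form: x_hat = x_m - V A' (A V A')^-1 A x_m
import numpy as np

x_m = np.array([100.0, 61.0, 42.0])    # measured flows: feed, product 1, product 2
V = np.diag([2.0, 1.0, 1.5]) ** 2      # measurement variances
A = np.array([[1.0, -1.0, -1.0]])      # mass balance: feed - product1 - product2 = 0

x_hat = x_m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x_m)
print("reconciled flows:", x_hat)
print("balance residual:", float((A @ x_hat)[0]))   # ~0 after reconciliation
```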
Article
Future mechatronic systems designed as stand-alone systems, such as the smart aircraft actuator, will increasingly require self-diagnosis and fault-detection capabilities. In this paper we deal with several signal-based sensor fault detection schemes as well as model-based FDI schemes, such as the extended Kalman filter (EKF) used to estimate system-critical parameters for actuator diagnosis. Concluding, a µ-robust realization for online FDI is outlined with regard to future development.
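A minimal sketch in the spirit of such an EKF-based scheme, with an assumed first-order actuator model, noise levels and fault logic of our own (not the paper's actuator or its µ-robust design): the state and one critical parameter are estimated jointly, and a persistent drift of the estimated parameter away from its nominal value could serve as a fault indicator.

```python
# Joint EKF estimation of actuator state x and parameter a for x' = -a*x + b*u.
import numpy as np

dt, b, a_true = 0.01, 1.0, 2.0
rng = np.random.default_rng(0)

z = np.array([0.0, 1.0])             # augmented estimate [x, a]; a starts wrong
P = np.diag([0.1, 1.0])              # estimate covariance
Q = np.diag([1e-5, 1e-6])            # process noise (lets `a` adapt slowly)
R = 1e-3                             # measurement noise variance
H = np.array([[1.0, 0.0]])           # only x is measured

x_true = 0.0
for k in range(2000):
    u = 1.0 if (k // 200) % 2 == 0 else 0.0          # square-wave command
    x_true += dt * (-a_true * x_true + b * u)
    y = x_true + rng.normal(0.0, R ** 0.5)

    # Prediction with Jacobian of f(z) = [x + dt*(-a*x + b*u), a]
    x_est, a_est = z
    F = np.array([[1.0 - dt * a_est, -dt * x_est],
                  [0.0, 1.0]])
    z = np.array([x_est + dt * (-a_est * x_est + b * u), a_est])
    P = F @ P @ F.T + Q

    # Update with the measurement y
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated parameter a = {z[1]:.2f} (nominal value {a_true})")
```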
Article
We discuss the basic concepts of the Dempster-Shafer approach: basic probability assignments, belief functions, and probability functions. We discuss how to represent various types of knowledge in this framework, measures of entropy and specificity for belief structures, and the combination and extension of belief structures. We point out some concerns associated with Dempster's rule of combination that arise from the normalization performed in the presence of conflict. We then introduce two alternative techniques for combining belief structures: the first uses Dempster's rule, while the second is based upon a modification of this rule. Finally, we discuss the issue of the credibility of a witness.
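To make the normalization issue concrete, the sketch below combines two basic probability assignments over a small, invented frame of discernment with Dempster's rule and reports the conflict mass K; one well-known modification (Yager's rule) would instead assign that conflicting mass to the whole frame, i.e. to total ignorance, rather than renormalizing.

```python
# Dempster's rule of combination for two basic probability assignments (BPAs).
from itertools import product

frame = frozenset({"fault_sensor", "fault_model", "no_fault"})   # invented frame
m1 = {frozenset({"fault_sensor"}): 0.6, frame: 0.4}
m2 = {frozenset({"fault_model"}): 0.5, frozenset({"no_fault"}): 0.2, frame: 0.3}

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb          # mass that would go to the empty set
    # Normalization step of Dempster's rule spreads out the conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

m12, K = dempster(m1, m2)
print(f"conflict K = {K:.2f}")
for s, v in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(sorted(s), round(v, 3))
```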
Article
After a short overview of the historical development of model-based fault detection, some proposals for the terminology in the field of supervision, fault detection and diagnosis are stated, based on the work within the IFAC SAFEPROCESS Technical Committee. Some basic fault-detection and diagnosis methods are briefly considered. Then, an evaluation of publications during the last 5 years shows some trends in the application of model-based fault-detection and diagnosis methods.
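As a compact example of the residual-based idea underlying many model-based fault-detection schemes (the process, fault and threshold are assumptions for illustration), a nominal model runs in parallel with the process and a fault is flagged when the output residual exceeds a threshold.

```python
# Residual-based fault detection: residual = measured output - model output.
import numpy as np

dt, a, b = 0.01, 2.0, 1.0
rng = np.random.default_rng(1)
x_true = x_model = 0.0
threshold = 0.08                           # chosen well above the noise level

for k in range(1500):
    u = 1.0
    bias = 0.2 if k >= 1000 else 0.0       # sensor bias fault injected at k = 1000
    x_true += dt * (-a * x_true + b * u)
    y = x_true + bias + rng.normal(0.0, 0.01)

    x_model += dt * (-a * x_model + b * u) # nominal model running in parallel
    residual = y - x_model
    if abs(residual) > threshold:
        print(f"fault detected at step {k}, residual = {residual:.3f}")
        break
```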
Article
The C++ package ADOL-C described in this paper facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The numerical values of derivative vectors are obtained free of truncation errors, typically at a small multiple of the run time and a fixed small multiple of the random access memory required by the given function evaluation program. Derivative matrices are obtained by columns, by rows or in sparse format. This tutorial describes the source code modifications required for the application of ADOL-C, the most frequently used drivers to evaluate derivatives, and some recent developments.
Conference Paper
Agreement problems involve a system of processes, some of which may be faulty. A fundamental problem of fault-tolerant distributed computing is for the reliable processes to reach a consensus. We survey the considerable literature on this problem that has developed over the past few years and give an informal overview of the major theoretical results in the area. To achieve reliability in distributed systems, protocols are needed which enable the system as a whole to continue to function despite the failure of a limited number of components. These protocols, as well as many other distributed computing problems, require cooperation among the processes. Fundamental to such cooperation is the problem of agreeing on a piece of data upon which the computation depends. For example, the data managers in a distributed database system need to agree on whether to commit or abort a given transaction [20, 26]. In a replicated file system, the nodes might need to agree o...
Pelz, P. F., Hedrich, P.: Unsicherheitsklassifizierung anhand einer Unsicherheitskarte [Uncertainty classification using an uncertainty map]. Internal Report, Chair of Fluid Systems. Darmstadt (2015)
Walther, A., Griewank, A.: Getting Started with ADOL-C. Version 2012