Content uploaded by Daniele Bigoni

Author content

All content in this area was uploaded by Daniele Bigoni on Mar 23, 2015

Content may be subject to copyright.

A preview of the PDF is not available

The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes are critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurement, prediction and manufacturing, and virtually any dynamical system used in engineering is subject to some of them.
The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function, hereafter called the dimension.
The STT-decomposition is based on the Polynomial Chaos (PC) approximation and on a low-rank decomposition of the function describing the Quantity of Interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm that selects the relevant points at which the function needs to be evaluated. This selection is informed by the approximated function itself and is thus able to adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank.
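The tensor-train format underlying this construction can be illustrated with a minimal sketch. The snippet below is not the thesis' optimization-based algorithm (which selects evaluation points adaptively precisely to avoid touching the full tensor); it is the basic TT-SVD procedure, shown here only to make concrete how a d-dimensional array factors into a chain of three-way cores whose storage grows linearly with the dimension:

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a d-dimensional array A into tensor-train (TT) cores
    via successive truncated SVDs."""
    shape = A.shape
    d = len(shape)
    cores = []
    r_prev = 1
    C = A.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))   # truncate negligible singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate the TT representation at a single multi-index."""
    v = np.ones((1,))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]
    return v[0]

# A rank-1 test tensor: f(i, j, k) = x[i] * y[j] * z[k]
x, y, z = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
A = np.einsum('i,j,k->ijk', x, y, z)
cores = tt_svd(A)
print([G.shape for G in cores])   # internal ranks stay at 1 for a rank-1 tensor
print(tt_eval(cores, (2, 3, 4)), A[2, 3, 4])
```

For a rank-1 input the cores all have internal rank 1, and contracting the chain at any multi-index reproduces the corresponding entry of the original tensor.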
In this work we present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, apply PC to it, obtaining the STT-decomposition. This allows each dimension to be decoupled, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are tested on commonly used test functions and on an elliptic problem with random inputs.
This work also presents three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies: one for solving the ordering problem suffered by the tensor-train decomposition, one for computing better estimates with respect to the norms usually employed in UQ, and one for the anisotropic adaptivity of the method.
The second part of this work presents engineering applications of the UQ framework. Both applications involve functions whose evaluation is computationally expensive, and thus the UQ analysis of the associated systems benefits greatly from methods that require few function evaluations.
We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analyses are carried out mostly using PC methods, resorting to random sampling methods for comparison and when strictly necessary.
The second application of the UQ framework concerns the propagation of the uncertainties entering a fully non-linear and dispersive model of water waves. This computationally challenging task is tackled by adopting state-of-the-art software for its numerical solution together with efficient PC methods. The aim of this study is the construction of stochastic benchmarks on which to test UQ methodologies before they are applied to full-scale problems, where efficient methods are a necessity given today's computational resources.
This work also produced several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.


... The underlying principle here lies in an alternative representation of the tensor (x) in Eq. (1) as a low rank tensor by adopting tensor decomposition techniques. These approaches enable low rank approximations (LRA) and have been recently used to develop computationally efficient approximation models in high dimensional problems across multiple fields like quantum computations [28,29], stochastic wave simulation [30], vehicle nonlinear dynamics [30], approximating potential energy surfaces [31,32], fluid flow around blunt bodies [33], approximating elasto-viscoplastic constitutive laws [34] and computational homogenization of materials [35]. The rest of the paper is organized as follows: "Low Rank Approximation" section presents the general principle of LRA and subsequently the rank-one approximation. ...


... The study presented in [18] showed that for MVEs with PBC, for a given level of spatial discretization the FFT-based solvers are more accurate than FE-based solvers. The calibration dataset has been generated using DAMASK for the elementary macroscale strain states in Eq. (30) with v_ij = 2 × 10⁻³ ∀ ij. In this work, the elastic properties of Ni were utilized, i.e., ℂ₁₁ = 246.5 GPa, ℂ₁₂ = 147.3 ...

This study focuses on investigating alternative computationally efficient techniques for numerically estimating the mesoscale (grain and sub-grain scales) stress and strain in volume elements within an elastic constitutive framework. The underlying principle here lies in developing approximations for the localization tensor that relates the stress and strain fields at the component level to the mesoscale, using low rank approximations. The study proposes two methods to build low rank approximations of localization tensor using different mathematical principles. Numerical results are presented to discuss the relative merits of low rank approximation vis-a-vis full scale simulations across various metals.

... Uncertainty Quantification is a very active research area that studies the impact of uncertainties on the prediction capabilities. Probability and the measure theory provide essential tools for the quantitative mathematical treatment of uncertainty [22]. In predictive science, UQ is defined as the process of identifying and quantifying the uncertainties associated with models, numerical algorithms, experiments, and their predicted outcomes or Quantity of Interest (QoI) [23]. ...

... Moreover, the sensitivity of the parameters plays a vital role in finding the solution of the inverse problem. The sensitivity makes the solution unstable, as a small change in the inputs x can lead to a significant change in the estimated model [22,50]. ...

... x_MAP, or mode as it is statistically named, is quantified. MAP represents the values of the inferred parameters with the highest probability of occurrence (22), and in this case there is no need to calculate the normalization factor z [57]. ...

This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. Also, a stochastic damage identification approach based on changes of eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railways track. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which are valid to detect damage in structures to Level 1 only. Changes in eigenfrequencies of cantilever structures are enough to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level.


... The Sobol decomposition [Sob90], also known as high-dimensional model representation (HDMR) [BEK15] or ANOVA decomposition [ES81], is one of the most important and widely used variance-based GSA approaches. The Sobol decomposition writes any square-integrable multidimensional function f : ℝ^N → ℝ as a sum of subfunctions: ...

... The Sobol indices S_α arise from normalizing the V_α by the total variance D, i.e., they are a mapping S : P({1, ..., N}) → [0, 1]: S_α := V_α / D with ∑_α S_α = 1. These indices are an invaluable tool in many GSA settings [STCR04], for example in factor prioritization (reducing uncertainty), factor fixing (identifying non-influential variables), risk minimization, reliability engineering, etc. They are also helpful to select good dimension orderings that lead to more compact surrogate models (example 5.8 by Bigoni [BEK15]; also considered in [DKLM14]). They are hyperedges of a hypergraph, since they encode n-ary relations within subsets of {1, ..., N}. ...

Sobol's method is one of the most powerful and widely used frameworks for global sensitivity analysis, and it maps every possible combination of input variables to an associated Sobol index. However, these indices are often challenging to analyze in depth, due in part to the lack of suitable, flexible enough, and fast‐to‐query data access structures as well as visualization techniques. We propose a visualization tool that leverages tensor decomposition, a compressed data format that can quickly and approximately answer sophisticated queries over exponential‐sized sets of Sobol indices. This way, we are able to capture the complete global sensitivity information of high‐dimensional scalar models. Our application is based on a three‐stage visualization, to which variables to be analyzed can be added or removed interactively. It includes a novel hourglass‐like diagram presenting the relative importance for any single variable or combination of input variables with respect to any composition of the rest of the input variables. We showcase our visualization with a range of example models, whereby we demonstrate the high expressive power and analytical capability made possible with the proposed method.
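The definitions quoted above (S_α = V_α / D) can be made concrete with a small Monte Carlo sketch. The toy model, the sample sizes, and the pick-freeze estimator below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (an assumption for illustration): two additive
# terms plus one interaction, with independent inputs uniform on [-1, 1].
def f(x):
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

N, d = 200_000, 3
A = rng.uniform(-1.0, 1.0, (N, d))
B = rng.uniform(-1.0, 1.0, (N, d))
fA, fB = f(A), f(B)
total_var = fA.var()

# First-order indices via a pick-freeze (Saltelli-type) estimator:
# AB_i shares only column i with A, so E[fA * (f(AB_i) - fB)] estimates V_i.
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(np.mean(fA * (f(ABi) - fB)) / total_var)

print("first-order Sobol indices:", np.round(S, 3))
# Analytic values for this toy model: S_1 = 3/16, S_2 = 12/16, S_3 = 0;
# the remaining 1/16 is the {1,3} interaction index, so the indices sum to 1.
```

Note how the third variable has a zero first-order index even though it influences the output: its entire contribution sits in the {1,3} interaction term, which is exactly the kind of n-ary relation the hypergraph view above encodes.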

... Therefore, uncertainty quantification (UQ) plays a significant role in the DD process and the subsequent decision-making phase. Bayesian inference, as a probabilistic framework of UQ, is an efficient approach for solving the ill-posed inverse problems using noisy data and various sources of uncertainties for pure parameter identification problems [37], which makes it a powerful approach to DD, SHM, and SI as well [38]. For instance, Bayesian parameter estimation is used in the domain of structural vibration-based parameter estimation [39,40], in the fields of SI and SHM as a probabilistic uncertainty approach [41][42][43][44], for the model selection of linear and nonlinear dynamical systems [45], and in the domain of DD and model updating [46][47][48]. ...

... The Bayesian approach uses the stochastic model of Equation (3), Y = G(X, d) + E, for solving inverse problems [38]. In inverse problems, the unknown input parameter X = {X_1, ..., X_m}^T ∈ ℝ^m is a random variable with a prior density π_X(x) = π_0(x). ...

This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damages in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is efficiently able to detect damages in cantilever structures to higher levels of damage detection, namely identifying both the damage location and severity using a low-cost structural health monitoring (SHM) system with a limited number of sensors; for example, accelerometers. The integration of Bayesian inference, as a stochastic framework, in the proposed approach, makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage the maintenance, repair, or replacement procedures.
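The MAP estimation described in the quoted snippets, for an additive-noise model Y = G(X) + E, can be sketched minimally as follows. The scalar forward model, the Gaussian prior, and the noise level are all assumptions for illustration; the point is that maximizing the unnormalized posterior suffices, so the normalization factor z never has to be computed:

```python
import numpy as np

# Hypothetical toy inverse problem (all numbers are assumptions):
# forward model G(x) = x^2, one noisy observation, Gaussian prior and noise.
G = lambda x: x**2
y_obs = 4.2            # observation of G(x_true) with x_true near 2
m0, s0 = 1.5, 1.0      # prior mean and standard deviation (assumed)
s_e = 0.1              # observation-noise standard deviation (assumed)

# The MAP point maximizes the unnormalized posterior
#   pi(x | y)  ∝  exp(-(y - G(x))^2 / (2 s_e^2)) * exp(-(x - m0)^2 / (2 s0^2)),
# so we simply minimize its negative logarithm:
def neg_log_posterior(x):
    return (y_obs - G(x))**2 / (2 * s_e**2) + (x - m0)**2 / (2 * s0**2)

xs = np.linspace(-5.0, 5.0, 200_001)     # brute-force grid search for clarity
x_map = xs[np.argmin(neg_log_posterior(xs))]
print(f"x_MAP ≈ {x_map:.3f}")
```

Here the likelihood alone would admit two modes near ±√4.2; the prior, centered at 1.5, breaks the tie in favor of the positive branch and nudges the estimate slightly toward the prior mean.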

... A standard assumption is that reality will not move significantly away from these predictions; meaning that small perturbations of the system will only cause small perturbations of the predictions. However, only within a probabilistic framework can such assumptions be deemed reasonable. ...

Offshore wind power has been in the spotlight among renewable energy sources. The current trends of increased power ratings and longer blades come together with the aim to reduce energy costs by design optimisation. The standard approach to deal with uncertainties in wind‐turbine design has been by the use of characteristic values and safety factors. This paper focusses on modelling the effect of structural and aerodynamic uncertainties in blades. First, the uncertainties in laminate properties are characterised and propagated in a blade structural model by means of a Monte Carlo simulation. Wind tunnel measurement data are then used to define the variability in lift and drag coefficients for both clean and rough aerofoil behaviour, which is then used to extrapolate rough behaviours throughout the blade. A stochastic spatial interpolation parameter is used to define the evolution of the degradation level. The combined effect and the variance contribution of these two uncertainty sources in turbine loads is finally defined by aeroelastic turbine simulation. This research aims to provide a framework to deal with uncertainties in wind‐turbine blade design and understand their effects in turbine behaviour.

... Sampling based techniques, such as Markov Chain Monte Carlo (MCMC) methods and bootstrapping, have seen use in epidemic modelling as seen in the studies [9,11,5], and by the expert group providing the Covid-19 related modelling for the Danish government. We propose an alternative approach called generalized Polynomial Chaos [4,13,12,2] as an efficient general non-iterative framework to do UQ-analysis using forward modelling where the uncertainties are parameterized; the outcome being a prediction in terms of the solution's expected value and uncertainty in terms of the solution's variance. ...

In the political decision process and control of COVID-19 (and other epidemic diseases), mathematical models play an important role. It is crucial to understand and quantify the uncertainty in models and their predictions in order to take the right decisions and trustfully communicate results and limitations. We propose to do uncertainty quantification in SIR-type models using the efficient framework of generalized Polynomial Chaos. Through two particular case studies based on Danish data for the spread of Covid-19, we demonstrate the applicability of the technique. The test cases are related to peak time estimation and superspreading and illustrate how very few model evaluations can provide insightful statistics.

... The characterisation of the uncertain inputs of a UQ problem is of utmost importance because poor input models will have a large effect on the interpretation of the uncertainty behaviour of the system. Bigoni [5] defines three main approaches to the probability distribution estimation of inputs: • Assumption: probability distributions for the parameters are defined relying on experience and wisdom. ...

The use of epidemic modelling in connection with spread of diseases plays an important role in understanding dynamics and providing forecasts for informed analysis and decision-making. In this regard, it is crucial to quantify the effects of uncertainty in the modelling and in model-based predictions to trustfully communicate results and limitations. We propose to do efficient uncertainty quantification in compartmental epidemic models using the generalized Polynomial Chaos (gPC) framework. This framework uses a suitable polynomial basis that can be tailored to the underlying distribution for the parameter uncertainty to do forward propagation through efficient sampling via a mathematical model to quantify the effect on the output. By evaluating the model in a small number of selected points, gPC provides illuminating statistics and sensitivity analysis at a low computational cost. Through two particular case studies based on Danish data for the spread of Covid-19, we demonstrate the applicability of the technique. The test cases consider epidemic peak time estimation and the dynamics between superspreading and partial lockdown measures. The computational results show the efficiency and feasibility of the uncertainty quantification techniques based on gPC, and highlight the relevance of computational uncertainty quantification in epidemic modelling.
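The non-intrusive gPC workflow described above can be sketched minimally with a Legendre basis for a uniformly distributed parameter. The toy exponential-growth model and all numbers below are assumptions for illustration, not the paper's SIR setup; the mean and variance are read directly off the gPC coefficients after a handful of model evaluations at Gauss quadrature nodes:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Hypothetical uncertain growth rate beta ~ U(a, b) and quantity of
# interest I(T) = I0 * exp(beta * T) (assumed toy model).
a, b = 0.2, 0.4
I0, T = 10.0, 10.0
model = lambda beta: I0 * np.exp(beta * T)

# Pseudo-spectral projection onto Legendre polynomials P_k(xi), xi in [-1, 1]
order, nquad = 8, 20
xi, w = leggauss(nquad)                       # Gauss-Legendre nodes/weights
beta = 0.5 * (b - a) * xi + 0.5 * (a + b)     # map nodes to [a, b]
f = model(beta)
# c_k = E[f P_k] / E[P_k^2], with E[P_k^2] = 1/(2k+1) under the uniform density
coeffs = np.array([
    (2 * k + 1) / 2 * np.sum(w * f * legval(xi, np.eye(order + 1)[k]))
    for k in range(order + 1)
])

# Statistics follow directly from the orthogonality of the basis
mean = coeffs[0]
var = np.sum(coeffs[1:]**2 / (2 * np.arange(1, order + 1) + 1))
print(f"mean ≈ {mean:.2f}, std ≈ {np.sqrt(var):.2f}")
```

With only 20 model evaluations the spectral expansion matches the analytic mean and standard deviation of this smooth toy model to several digits, which is exactly the efficiency argument made above for expensive epidemic models.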
