Thesis (PDF available)

Uncertainty Quantification with Applications to Engineering Problems

Abstract

The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and practically any dynamical system used in engineering is subject to some of them. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the efficient propagation of uncertainties, which aims to provide an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function – which we will call its dimension in the following. The STT-decomposition is based on the Polynomial Chaos (PC) approximation and on a low-rank decomposition of the function describing the Quantity of Interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm that selects the relevant points on which the function needs to be evaluated. This selection is informed by the approximated function and can thus adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank. In this work we present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, we apply PC to it, obtaining the STT-decomposition.
This allows the decoupling of the dimensions, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test functions and on an elliptic problem with random inputs. This work also presents three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies: for solving the ordering problem suffered by the tensor-train decomposition, for computing better estimates with respect to the norms usually employed in UQ, and for the anisotropic adaptivity of the method. The second part of this work presents engineering applications of the UQ framework. Both applications are characterized by functions whose evaluation is computationally expensive, so the UQ analysis of the associated systems benefits greatly from methods that require few function evaluations. We first consider the propagation of uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analyses are carried out mostly with PC methods, resorting to random sampling methods for comparison and when strictly necessary. The second application of the UQ framework is the propagation of the uncertainties entering a fully non-linear and dispersive model of water waves. This computationally challenging task is tackled by adopting state-of-the-art software for its numerical solution together with efficient PC methods. The aim of this study is the construction of stochastic benchmarks on which to test UQ methodologies before they are applied to full-scale problems, where efficient methods are a necessity with today's computational resources. An additional outcome of this work is a set of freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.
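The discrete tensor-train decomposition at the core of the STT approach can be illustrated with a minimal sketch. The code below uses the classical TT-SVD algorithm on a fully evaluated tensor; the thesis instead builds the decomposition from a small number of adaptively selected evaluations, so the function names and the tolerance here are illustrative, not the thesis implementation:

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Compress a d-dimensional array into tensor-train (TT) cores
    via sequential truncated SVDs (the TT-SVD algorithm)."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        # keep only singular values above a relative tolerance
        new_rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :new_rank].reshape(rank, shape[k], new_rank))
        mat = (s[:new_rank, None] * Vt[:new_rank]).reshape(new_rank * shape[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate the TT representation at a single multi-index."""
    v = np.ones((1,))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v[0]
```

For a separable (rank-one) function, all TT ranks collapse to one, which is why the storage and evaluation cost of the format grow only linearly with the dimension.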
... The underlying principle here lies in an alternative representation of the tensor in Eq. (1) as a low-rank tensor by adopting tensor decomposition techniques. These approaches enable low-rank approximations (LRA) and have recently been used to develop computationally efficient approximation models for high-dimensional problems across multiple fields, such as quantum computations [28,29], stochastic wave simulation [30], vehicle nonlinear dynamics [30], approximating potential energy surfaces [31,32], fluid flow around blunt bodies [33], approximating elasto-viscoplastic constitutive laws [34] and computational homogenization of materials [35]. The rest of the paper is organized as follows: "Low Rank Approximation" section presents the general principle of LRA and subsequently the rank-one approximation. ...
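The LRA principle the excerpt refers to is easiest to see for a matrix (a two-dimensional tensor), where the truncated SVD gives the best rank-r approximation in the Frobenius norm (Eckart–Young), with the rank-one approximation being the case r = 1. A minimal sketch (the function name is ours, not from the cited paper):

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation of a matrix in the Frobenius norm
    (Eckart-Young theorem), computed via the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # keep the r leading singular triplets
    return U[:, :r] * s[:r] @ Vt[:r]
```

Storing the factors costs r(m + n) numbers instead of mn, which is the source of the computational savings in the high-dimensional generalizations.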
... The study presented in [18] showed that for MVEs with PBC, for a given level of spatial discretization the FFT-based solvers are more accurate than FE-based solvers. The calibration dataset has been generated using DAMASK for the elementary macroscale strain states in Eq. (30) with v_ij = 2 × 10^−3 ∀ ij. In this work, the elastic properties of Ni were utilized, i.e., ℂ_11 = 246.5 GPa, ℂ_12 = 147.3 ...
Article
Full-text available
This study focuses on investigating alternative computationally efficient techniques for numerically estimating the mesoscale (grain and sub-grain scales) stress and strain in volume elements within an elastic constitutive framework. The underlying principle here lies in developing approximations for the localization tensor that relates the stress and strain fields at the component level to the mesoscale, using low rank approximations. The study proposes two methods to build low rank approximations of localization tensor using different mathematical principles. Numerical results are presented to discuss the relative merits of low rank approximation vis-a-vis full scale simulations across various metals.
... Uncertainty Quantification is a very active research area that studies the impact of uncertainties on prediction capabilities. Probability and measure theory provide essential tools for the quantitative mathematical treatment of uncertainty [22]. In predictive science, UQ is defined as the process of identifying and quantifying the uncertainties associated with models, numerical algorithms, experiments, and their predicted outcomes or Quantity of Interest (QoI) [23]. ...
... Moreover, the sensitivity of the parameters plays a vital role in finding the solution of the inverse problem. The sensitivity makes the solution unstable, as a small change in the inputs x can lead to a significant change in the estimated model [22,50]. ...
... x_MAP, or the mode as it is statistically named, is quantified. MAP represents the values of the inferred parameters with the highest probability of occurrence (Eq. 22), and in this case there is no need to calculate the normalization factor z [57]. ...
Thesis
Full-text available
This work presents a robust status-monitoring approach for detecting damage in cantilever structures based on logistic functions. Also, a stochastic damage identification approach based on changes of eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which can detect damage in structures at Level 1 only. Changes in eigenfrequencies of cantilever structures are enough to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level.
... These indices are an invaluable tool in many SA settings [15], for example in factor prioritization (reducing uncertainty), factor fixing (identifying non-influential variables), risk minimization, reliability engineering, etc. They are also helpful to select good dimension orderings that lead to more compact surrogate models (example 5.8 by Bigoni [16]; also considered in [11]). They are hyperedges of a hypergraph, since they encode n-ary relations within subsets of {1, ..., N }. ...
... with C an N-dimensional tensor containing the expansion weights. Sudret [2] established a connection between the Sobol decomposition and the PCE that has gained significant popularity [11,16,44]. The author proposed the indices SU_α, which approximate each Sobol coefficient S_α from a PCE surrogate of bounded degree. ...
Preprint
Sobol indices are a widespread quantitative measure for variance-based global sensitivity analysis, but computing and utilizing them remains challenging for high-dimensional systems. We propose the tensor train decomposition (TT) as a unified framework for surrogate modeling and global sensitivity analysis via Sobol indices. We first overview several strategies to build a TT surrogate of the unknown true model using either an adaptive sampling strategy or a predefined set of samples. We then introduce and derive the Sobol tensor train, which compactly represents the Sobol indices for all possible joint variable interactions which are infeasible to compute and store explicitly. Our formulation allows efficient aggregation and subselection operations: we are able to obtain related indices (closed, total, and superset indices) at negligible cost. Furthermore, we exploit an existing global optimization procedure within the TT framework for variable selection and model analysis tasks. We demonstrate our algorithms with two analytical engineering models and a parallel computing simulation data set.
... The Sobol decomposition [Sob90], also known as high-dimensional model representation (HDMR) [BEK15] or ANOVA decomposition [ES81], is one of the most important and widely used variance-based GSA approaches. The Sobol decomposition writes any square-integrable multidimensional function f: R^N → R as a sum of subfunctions: ...
... The Sobol indices S_α arise from normalizing the V_α by the total variance D, i.e., they are a mapping S: P({1, ..., N}) → [0, 1], with S_α := V_α/D and ∑_α S_α = 1. These indices are an invaluable tool in many GSA settings [STCR04], for example in factor prioritization (reducing uncertainty), factor fixing (identifying non-influential variables), risk minimization, reliability engineering, etc. They are also helpful to select good dimension orderings that lead to more compact surrogate models (example 5.8 by Bigoni [BEK15]; also considered in [DKLM14]). They are hyperedges of a hypergraph, since they encode n-ary relations within subsets of {1, ..., N}. ...
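As a concrete illustration of these indices, the sketch below estimates first-order and total Sobol indices with the standard Saltelli/Jansen pick-and-freeze Monte Carlo estimator on the Ishigami function, a common GSA benchmark; this plain sampling estimator is not the tensor-based approach of the cited works:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami benchmark function, inputs uniform on [-pi, pi]."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def sobol_indices(f, d, n, rng):
    """First-order (S) and total (ST) Sobol indices via the
    pick-and-freeze Monte Carlo estimator, inputs ~ U(-pi, pi)."""
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except the i-th
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var      # Saltelli (2010)
        ST[i] = 0.5 * np.mean((fA - fABi)**2) / var  # Jansen (1999)
    return S, ST
```

For the Ishigami function the analytical first-order indices are approximately S_1 = 0.314, S_2 = 0.442, S_3 = 0, which the estimator recovers to within Monte Carlo error.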
Article
Full-text available
Sobol's method is one of the most powerful and widely used frameworks for global sensitivity analysis, and it maps every possible combination of input variables to an associated Sobol index. However, these indices are often challenging to analyze in depth, due in part to the lack of suitable, flexible enough, and fast‐to‐query data access structures as well as visualization techniques. We propose a visualization tool that leverages tensor decomposition, a compressed data format that can quickly and approximately answer sophisticated queries over exponential‐sized sets of Sobol indices. This way, we are able to capture the complete global sensitivity information of high‐dimensional scalar models. Our application is based on a three‐stage visualization, to which variables to be analyzed can be added or removed interactively. It includes a novel hourglass‐like diagram presenting the relative importance for any single variable or combination of input variables with respect to any composition of the rest of the input variables. We showcase our visualization with a range of example models, whereby we demonstrate the high expressive power and analytical capability made possible with the proposed method.
... Unlike finite difference methods, AD computes derivatives analytically, ensuring high accuracy and computational efficiency. This technique has been successfully applied in fields such as design optimization [40], machine learning [51], optimal control [21], inverse problems [60], and uncertainty quantification using adjoint-based formulations [7]. In computational fluid dynamics (CFD), for example, AD has enabled the development of gradient-based optimization algorithms that significantly improve the design of aerodynamic and hydrodynamic systems [29,61]. ...
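The cited solver uses reverse-mode AD in Julia; the underlying principle that AD propagates derivatives analytically rather than by finite differences can be illustrated with a minimal forward-mode sketch based on dual numbers (all names here are illustrative, not part of any real AD library):

```python
import math

class Dual:
    """Minimal forward-mode AD: carry (value, derivative) pairs and
    propagate them through arithmetic via the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    """sin lifted to dual numbers: d/dx sin(x) = cos(x)."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def derivative(f, x0):
    """df/dx at x0, exact to machine precision (no step-size error)."""
    return f(Dual(x0, 1.0)).der
```

Unlike a finite-difference quotient, there is no truncation or cancellation error to tune away: the derivative comes out exact up to floating-point rounding.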
Preprint
Full-text available
Accurately predicting wave-structure interactions is critical for the effective design and analysis of marine structures. This is typically achieved using solvers that employ the boundary element method (BEM), which relies on linear potential flow theory. Precise estimation of the sensitivity of these interactions is equally important for system-level applications such as design optimization. Current BEM solvers are unable to provide these sensitivities as they are not differentiable. To address these challenges, we have developed a fully-differentiable BEM solver for marine hydrodynamics, capable of calculating diffraction and radiation coefficients, and their derivatives with high accuracy. This new solver implements both direct and indirect BEM formulations and incorporates two Green's function expressions, offering a trade-off between accuracy and computational speed. Gradients are computed using reverse-mode automatic differentiation (AD) within the Julia programming language. As a first case study, we analyze two identical floating spheres, evaluating gradients with respect to physical dimensions, inter-sphere distance, and wave frequency. Validation studies demonstrate excellent agreement between AD-computed gradients and finite-difference results. In a second case study, we leverage AD-computed gradients to optimize the mechanical power production of a pair of wave energy converters (WECs). This represents the first application of gradients in WEC power optimization, offering valuable insights into hydrodynamic interactions and advancing the understanding of layout optimization for maximum efficiency. Beyond power optimization, the differentiable BEM solver highlights the potential of AD for offshore design studies.
... A standard assumption is that reality will not move significantly away from these predictions, meaning that small perturbations of the system will only cause small perturbations of the predictions. However, only within a probabilistic framework can such assumptions be deemed reasonable. ...
Article
Full-text available
Offshore wind power has been in the spotlight among renewable energy sources. The current trends of increased power ratings and longer blades come together with the aim to reduce energy costs by design optimisation. The standard approach to deal with uncertainties in wind‐turbine design has been by the use of characteristic values and safety factors. This paper focusses on modelling the effect of structural and aerodynamic uncertainties in blades. First, the uncertainties in laminate properties are characterised and propagated in a blade structural model by means of a Monte Carlo simulation. Wind tunnel measurement data are then used to define the variability in lift and drag coefficients for both clean and rough aerofoil behaviour, which is then used to extrapolate rough behaviours throughout the blade. A stochastic spatial interpolation parameter is used to define the evolution of the degradation level. The combined effect and the variance contribution of these two uncertainty sources in turbine loads is finally defined by aeroelastic turbine simulation. This research aims to provide a framework to deal with uncertainties in wind‐turbine blade design and understand their effects on turbine behaviour.
... Sampling-based techniques, such as Markov Chain Monte Carlo (MCMC) methods and bootstrapping, have seen use in epidemic modelling as seen in the studies [9,11,5], and by the expert group providing the Covid-19 related modelling for the Danish government. We propose an alternative approach called generalized Polynomial Chaos [4,13,12,2] as an efficient, general, non-iterative framework for UQ analysis using forward modelling where the uncertainties are parameterized; the outcome being a prediction in terms of the solution's expected value and uncertainty in terms of the solution's variance. ...
Preprint
Full-text available
In the political decision process and control of COVID-19 (and other epidemic diseases), mathematical models play an important role. It is crucial to understand and quantify the uncertainty in models and their predictions in order to take the right decisions and trustfully communicate results and limitations. We propose to do uncertainty quantification in SIR-type models using the efficient framework of generalized Polynomial Chaos. Through two particular case studies based on Danish data for the spread of Covid-19 we demonstrate the applicability of the technique. The test cases are related to peak time estimation and superspreading and illustrate how very few model evaluations can provide insightful statistics.
Article
Full-text available
This paper presents the results of an Uncertainty Quantification and Sensitivity Analysis carried out for the k-ω SST turbulence model applied to the two-dimensional study of Vortex Induced Vibrations of an elastically mounted cylinder. The turbulence model parameters are treated as epistemic uncertain variables and the forward propagation of uncertainty is evaluated using stochastic expansions based on non-intrusive polynomial chaos. The relative contribution of the closure coefficients to the total uncertainty of the output quantities of interest, the non-dimensional amplitude and the frequency ratio, is evaluated using the Sobol indices. The analysis is repeated for different orders of the polynomial chaos expansion. A set of significant coefficients, which contribute most to the uncertainty for this specific case, is identified and furthermore compared with the sets provided for some other selected flow problems in order to gain further insight into the k-ω SST turbulence model.
Article
This paper investigates the application of tensor decomposition and the stochastic Galerkin method for the uncertainty quantification of complex systems characterized by high parameter dimensionality. By employing these methods, we construct surrogate models aimed at efficiently predicting system output uncertainty. The effectiveness of our approaches is demonstrated through a comparative analysis of accuracy and CPU cost with conventional Galerkin methods, using two transmission line circuit examples with up to 25 parameters.
Article
Full-text available
The use of epidemic modelling in connection with spread of diseases plays an important role in understanding dynamics and providing forecasts for informed analysis and decision-making. In this regard, it is crucial to quantify the effects of uncertainty in the modelling and in model-based predictions to trustfully communicate results and limitations. We propose to do efficient uncertainty quantification in compartmental epidemic models using the generalized Polynomial Chaos (gPC) framework. This framework uses a suitable polynomial basis that can be tailored to the underlying distribution for the parameter uncertainty to do forward propagation through efficient sampling via a mathematical model to quantify the effect on the output. By evaluating the model in a small number of selected points, gPC provides illuminating statistics and sensitivity analysis at a low computational cost. Through two particular case studies based on Danish data for the spread of Covid-19, we demonstrate the applicability of the technique. The test cases consider epidemic peak time estimation and the dynamics between superspreading and partial lockdown measures. The computational results show the efficiency and feasibility of the uncertainty quantification techniques based on gPC, and highlight the relevance of computational uncertainty quantification in epidemic modelling.
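The gPC recipe described above (evaluate the model at a few quadrature nodes, project onto an orthogonal polynomial basis, and read mean and variance off the coefficients) can be sketched for a scalar toy model with a uniform parameter and a Legendre basis; this is an illustration of the framework, not the epidemic model of the paper:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def gpc_stats(f, order):
    """Mean and variance of f(xi), xi ~ U(-1, 1), from a Legendre
    polynomial chaos expansion computed by Gauss-Legendre quadrature."""
    nodes, weights = leggauss(order + 1)   # model evaluated at few points
    fvals = f(nodes)
    coeffs = []
    for k in range(order + 1):
        Pk = Legendre.basis(k)(nodes)
        # projection <f, P_k> / <P_k, P_k> w.r.t. the uniform density 1/2
        ck = (2 * k + 1) / 2 * np.sum(weights * fvals * Pk)
        coeffs.append(ck)
    mean = coeffs[0]
    # Parseval: variance is the weighted sum of squared non-mean coefficients
    var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))
    return mean, var
```

For f(xi) = xi^2 the exact statistics are mean 1/3 and variance 4/45, recovered exactly by a low-order expansion, which mirrors the paper's point that a small number of model evaluations suffices.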