Article

High-Dimensional Uncertainty Quantification Using Stochastic Galerkin and Tensor Decomposition


Abstract

This paper investigates the application of tensor decomposition and the stochastic Galerkin method for the uncertainty quantification of complex systems characterized by high parameter dimensionality. By employing these methods, we construct surrogate models aimed at efficiently predicting system output uncertainty. The effectiveness of our approaches is demonstrated through a comparative analysis of accuracy and CPU cost with conventional Galerkin methods, using two transmission line circuit examples with up to 25 parameters.
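To make the surrogate idea concrete, the sketch below (ours, not the paper's code) shows how a polynomial chaos surrogate in an orthonormal basis yields the output mean and variance directly from its coefficients; the coefficient values here are placeholders.

```python
import numpy as np

# Assumed, illustrative PC coefficients of a surrogate y(xi) ~ sum_k c_k Phi_k(xi)
# in an ORTHONORMAL basis; in practice they come from a stochastic Galerkin or
# tensor-decomposition solver as in the paper.
c = np.array([1.2, 0.30, -0.05, 0.02])

mean = c[0]                    # E[Phi_0] = 1 and E[Phi_k] = 0 for k > 0
variance = np.sum(c[1:] ** 2)  # orthonormality: E[Phi_j * Phi_k] = delta_jk

print(f"surrogate mean = {mean:.4f}, variance = {variance:.4f}")
```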


References
Article
This paper presents an iterative and decoupled perturbative stochastic Galerkin (SG) method for the variability analysis of stochastic linear circuits with a large number of uncertain parameters. State-of-the-art implementations of polynomial chaos expansion and SG projection produce a large deterministic circuit that is fully coupled, thus becoming cumbersome to implement and inefficient to solve when the number of random parameters is large. In a perturbative approach, component variability is interpreted as a perturbation of its nominal value. The relaxation of the resulting equations and the application of a SG method lead to a decoupled system of equations, corresponding to a modified equivalent circuit in which each stochastic component is replaced by the nominal element equipped with a parallel current source accounting for the effect of variability. The solution of the perturbation problem is carried out in an iterative manner by suitably updating the equivalent current sources by means of Jacobi- or Gauss-Seidel strategies, until convergence is reached. A sparse implementation allows avoiding the refinement of negligible coefficients, yielding further efficiency improvement. Moreover, for time-invariant circuits, the iterations are effectively performed in post-processing after characterizing the circuit in time or frequency domain by means of a limited number of simulations. Several application examples are used to illustrate the proposed technique and highlight its performance and computational advantages.
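A minimal numerical analogue of this relaxation (our sketch, not the authors' circuit formulation): interpret the variability as a perturbation dA of a nominal matrix A0 and iterate Jacobi-style, re-solving only the nominal system with an updated source term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = np.diag(np.full(n, 4.0))           # nominal system (factorized once in practice)
dA = 0.1 * rng.standard_normal((n, n))  # small perturbation modeling variability
b = rng.standard_normal(n)

x = np.linalg.solve(A0, b)              # zeroth-order (nominal) solution
for _ in range(50):
    # update the "equivalent source" b - dA @ x, then re-solve the nominal system
    x = np.linalg.solve(A0, b - dA @ x)

print(np.allclose(x, np.linalg.solve(A0 + dA, b)))  # True: converged to the full solution
```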
Article
We first investigate the structure of the systems derived from the gPC-based stochastic Galerkin method for nonlinear hyperbolic systems with random inputs. This method adopts generalized polynomial chaos (gPC) approximations in the stochastic Galerkin framework, but such approximations to nonlinear hyperbolic systems do not necessarily yield hyperbolic systems [Lucor 2013]. Thus, based on the work in [framework], we propose a framework to carry out model reduction for the general nonlinear hyperbolic system and derive a final global system. Within this framework, the nonlinear hyperbolic system in one space dimension and the symmetric hyperbolic system in multiple space dimensions are reduced into a symmetric hyperbolic system based on the stochastic Galerkin method. We note that the basis functions in the expansion are not restricted to polynomials in the random variables, as in the gPC method, and there is no restriction on the dimension of the random variables either.
Article
This letter proposes a general and effective decoupled technique for the stochastic simulation of nonlinear circuits via polynomial chaos. According to the standard framework, stochastic circuit waveforms are still expressed as expansions of orthonormal polynomials. However, by using a point-matching approach instead of the traditional stochastic Galerkin method, a transformation is introduced that renders the polynomial chaos coefficients decoupled and therefore obtainable via repeated non-intrusive simulations and an inverse linear transformation. As discussed throughout the letter, the proposed technique overcomes several limitations of state-of-the-art methods. In particular, the scalability is hugely improved and tens of random parameters can be simultaneously treated within the polynomial chaos framework. Validating application examples are provided that concern the statistical analysis of microwave amplifiers with up to 25 random parameters.
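The decoupling can be illustrated in a few lines (our sketch; the model function and the choice of matching points are assumptions, not the letter's): evaluate the deterministic model at as many points as there are basis polynomials, then recover the PC coefficients by inverting the evaluation matrix.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(xi):                        # stand-in for one non-intrusive circuit simulation
    return np.exp(0.3 * xi)

order = 4                             # PC order -> order + 1 basis polynomials
pts, _ = hermegauss(order + 1)        # matching points (one common choice)

# V[i, k] = He_k(pts[i]), probabilists' Hermite polynomials
V = np.stack([hermeval(pts, np.eye(order + 1)[k]) for k in range(order + 1)], axis=1)

samples = model(pts)                  # repeated, fully decoupled simulations
coeffs = np.linalg.solve(V, samples)  # the inverse linear transformation
print(coeffs)
```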
Thesis
The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality is a problem that afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function (which we will call the dimension in the following). The STT-decomposition is based on the Polynomial Chaos (PC) approximation and the low-rank decomposition of the function describing the quantity of interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm for the selection of the relevant points on which the function needs to be evaluated. The selection of these points is informed by the approximated function and is thus able to adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank. In this work we present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, we apply PC to it, obtaining the STT-decomposition. This allows the decoupling of each dimension, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test functions and on an elliptic problem with random inputs. This work also presents three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies: for solving the ordering problem suffered by the tensor-train decomposition, for computing better estimates with respect to the norms usually employed in UQ, and for the anisotropic adaptivity of the method.

The second part of this work presents engineering applications of the UQ framework. Both applications are characterized by functions whose evaluation is computationally expensive, so the UQ analysis of the associated systems benefits greatly from methods which require few function evaluations. We first consider the propagation of uncertainty and the sensitivity analysis of the nonlinear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analyses are carried out mostly using PC methods, resorting to random sampling methods for comparison and when strictly necessary. The second application of the UQ framework is the propagation of the uncertainties entering a fully nonlinear and dispersive model of water waves. This computationally challenging task is tackled with the adoption of state-of-the-art software for its numerical solution and of efficient PC methods. A further aim of this study is the construction of stochastic benchmarks on which to test UQ methodologies before they are applied to full-scale problems, where efficient methods are necessary given today's computational resources. An additional outcome of this work is the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.
Article
The accuracy and the computational efficiency of a Point-Collocation Non-Intrusive Polynomial Chaos (NIPC) method applied to stochastic problems with multiple uncertain input variables has been investigated. Two stochastic model problems with multiple uniform random variables were studied to determine the effect of different sampling methods (Random, Latin Hypercube, and Hammersley) for the selection of the collocation points. The effect of the number of collocation points on the accuracy of the polynomial chaos expansions was also investigated. The results of the stochastic model problems show that all three sampling methods exhibit similar performance in terms of the accuracy and the computational efficiency of the chaos expansions. It has been observed that using a number of collocation points that is twice the minimum number required gives a better approximation to the statistics at each polynomial degree. This improvement can be related to the increase in the accuracy of the polynomial coefficients due to the use of more information in their calculation. The results of the stochastic model problems also indicate that for problems with multiple random variables, improving the accuracy of polynomial chaos coefficients in NIPC approaches may reduce the computational expense by achieving the same accuracy level with a lower-order polynomial expansion. To demonstrate the application of Point-Collocation NIPC to an aerospace problem with multiple uncertain input variables, a stochastic computational aerodynamics problem which includes the numerical simulation of steady, inviscid, transonic flow over a three-dimensional wing with an uncertain free-stream Mach number and angle of attack has been studied. For this study, a 5th-degree Point-Collocation NIPC expansion obtained with Hammersley sampling was capable of estimating the statistics at the accuracy level of 1000 Latin Hypercube Monte Carlo simulations with a significantly lower computational cost.
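A compact sketch of the oversampling heuristic reported above (our own toy setup; the paper used Random, Latin Hypercube, and Hammersley sampling): take roughly twice the minimum number of collocation points and fit the Legendre-chaos coefficients by least squares.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)

def model(xi):                          # toy model of a uniform random input on [-1, 1]
    return np.sin(np.pi * xi) + 0.5 * xi ** 2

degree = 5
n_terms = degree + 1
n_pts = 2 * n_terms                     # ~2x oversampling, as the study recommends

xi = rng.uniform(-1.0, 1.0, n_pts)      # plain random collocation points
Phi = np.stack([legval(xi, np.eye(n_terms)[k]) for k in range(n_terms)], axis=1)

coeffs, *_ = np.linalg.lstsq(Phi, model(xi), rcond=None)
print(coeffs)
```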
Article
Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low levels of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an ANOVA-based stochastic circuit/MEMS simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 minutes in MATLAB on a regular personal computer.
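As a toy illustration of the ANOVA idea (ours; the paper's simulator is far more elaborate): keep only the constant term and the per-variable main effects, estimating the marginalizations by sampling, so that each input dimension is handled independently.

```python
import numpy as np

def f(X):                                   # toy model with 3 inputs on [-1, 1]
    return X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]

rng = np.random.default_rng(2)
d, n_mc = 3, 50_000
X = rng.uniform(-1.0, 1.0, size=(n_mc, d))

f0 = f(X).mean()                            # constant ANOVA term, here ~1/3

def main_effect(i, value):
    """First-order effect f_i(value) = E[f | x_i = value] - f0."""
    Xi = X.copy()
    Xi[:, i] = value
    return f(Xi).mean() - f0

print(f0, main_effect(0, 1.0), main_effect(1, 1.0))   # ~0.333, ~0.667, ~0.5
```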
Article
This paper presents a systematic approach for the statistical simulation of nonlinear networks with uncertain circuit elements. The proposed technique is based on spectral expansions of the elements' constitutive equations (I–V characteristics) into polynomial chaos series and applies to arbitrary circuit components, both linear and nonlinear. By application of a stochastic Galerkin method, the stochastic problem is cast in terms of an augmented set of deterministic constitutive equations relating the voltage and current spectral coefficients. These new equations are given a circuit interpretation in terms of equivalent models that can be readily implemented in SPICE-type simulators, as such allowing to take full advantage of existing algorithms and available built-in models for complex devices, like diodes and MOSFETs. The pertinent statistical information of the entire nonlinear network is retrieved via a single simulation. This approach is both accurate and efficient with respect to traditional techniques, such as Monte Carlo sampling. Application examples, including the analysis of a diode rectifier, a CMOS logic gate and a low-noise amplifier, validate the methodology and conclude the paper.
Article
This letter presents a novel methodology for the stochastic simulation of interconnects illuminated by random external fields. The proposed strategy is based on the polynomial expansion of the classical forcing terms describing the coupling of external fields onto transmission lines. This method turns out to be accurate and is much faster than traditional solutions like the Monte Carlo (MC) method in determining statistical parameters of interest. The advantages of the proposed approach are demonstrated by means of comparisons with MC simulations in the case of incident plane waves with arbitrary amplitude, polarization, or direction of incidence.
Article
This paper presents an alternative modeling strategy for the stochastic analysis of high-speed interconnects. The proposed approach takes advantage of the polynomial chaos framework and a fully SPICE-compatible formulation to avoid repeated circuit simulations, thereby alleviating the computational burden associated with traditional sampling-based methods such as Monte Carlo. Nonetheless, the technique offers very good accuracy and the opportunity to easily simulate complex interconnect topologies which include lossy and dispersive transmission lines, thus overcoming the limitations of previous formulations. Application examples involving the stochastic analysis of on-chip and on-board interconnects validate the methodology proposed.
Article
Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical “full” PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30–500 random variables, respectively.
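The leave-one-out estimate mentioned above admits a cheap closed form for linear-in-the-coefficients surrogates; here is a simplified sketch (ours, without the paper's correction factor), using the hat-matrix identity so that no refitting is needed.

```python
import numpy as np

def loo_error(Phi, y):
    """Mean-squared leave-one-out error of a least-squares fit y ~ Phi @ c.

    Phi: (n, P) matrix of basis evaluations, y: (n,) model outputs.
    """
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    h = np.diag(Phi @ np.linalg.pinv(Phi))      # leverages: diagonal of the hat matrix
    resid = y - Phi @ c
    return np.mean((resid / (1.0 - h)) ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 40)
Phi = np.vander(x, 4, increasing=True)          # basis 1, x, x^2, x^3 (for brevity)
print(loo_error(Phi, np.exp(x)))
```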
Article
A simple nonrecursive form of the tensor decomposition in d dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
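The construction lends itself to a short implementation; the sketch below (our illustration of the TT-SVD-style algorithm the abstract describes) decomposes an array by truncated SVDs of successive unfoldings.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose ndarray A into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, d = A.shape, A.ndim
    cores, r_prev = [], 1
    C = A.reshape(dims[0], -1)
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))   # rank revealed by this unfolding
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]           # pass the remainder on to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# sin(i + j + k + l) separates with TT ranks 2, so the cores stay tiny
A = np.fromfunction(lambda i, j, k, l: np.sin(i + j + k + l), (4, 4, 4, 4))
print([G.shape for G in tt_svd(A)])        # [(1, 4, 2), (2, 4, 2), (2, 4, 2), (2, 4, 1)]
```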
Article
We present a new method for solving stochastic differential equations based on Galerkin projections and extensions of Wiener's polynomial chaos. Specifically, we represent the stochastic processes with an optimum trial basis from the Askey family of orthogonal polynomials that reduces the dimensionality of the system and leads to exponential convergence of the error. Several continuous and discrete processes are treated, and numerical examples show substantial speed-up compared to Monte Carlo simulations for low dimensional stochastic inputs.
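The exponential convergence can be seen in a one-dimensional example (ours, not the paper's): project f(Z) = exp(Z), with Z standard normal, onto probabilists' Hermite polynomials via Gauss-Hermite quadrature; the exact coefficients are sqrt(e)/k!, decaying factorially with the order.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(40)
w = w / np.sqrt(2.0 * np.pi)    # normalize weights to the standard Gaussian measure
f = np.exp(x)

for k in range(6):
    He_k = hermeval(x, np.eye(k + 1)[k])
    c_k = np.sum(w * f * He_k) / factorial(k)    # Galerkin projection; E[He_k^2] = k!
    print(k, c_k, np.sqrt(np.e) / factorial(k))  # quadrature vs. exact coefficient
```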
Article
“Curse of dimensionality” has become the major challenge for existing high-sigma yield analysis methods. In this article, we develop a meta-model using Low-Rank Tensor Approximation (LRTA) to substitute expensive SPICE simulation. The polynomial degree of our LRTA model grows linearly with the circuit dimension. This makes it especially promising for high-dimensional circuit problems. Our LRTA meta-model is solved efficiently with a robust greedy algorithm and calibrated iteratively with a bootstrap-assisted adaptive sampling method. We also develop a novel global sensitivity analysis approach to generate a reduced LRTA meta-model which is more compact. It further accelerates the procedure of model calibration and yield estimation. Experiments on memory and analog circuits validate that the proposed LRTA method outperforms other state-of-the-art approaches in terms of accuracy and efficiency.
Article
Fabrication process variations can significantly influence the performance and yield of nanoscale electronic and photonic circuits. Stochastic spectral methods have achieved great success in quantifying the impact of process variations, but they suffer from the curse of dimensionality. Recently, low-rank tensor methods have been developed to mitigate this issue, but two fundamental challenges remain open: how to automatically determine the tensor rank and how to adaptively pick the informative simulation samples. This article proposes a novel tensor-regression method to address these two challenges. We use an ℓ_q/ℓ_2 group-sparsity regularization to determine the tensor rank. The resulting optimization problem can be efficiently solved via an alternating minimization solver. We also propose a two-stage adaptive sampling method to reduce the simulation cost. Our method considers both exploration and exploitation via the estimated Voronoi cell volume and nonlinearity measurement, respectively. The proposed model is verified with synthetic and some realistic circuit benchmarks, on which our method can well capture the uncertainty caused by 19–100 random variables with only 100–600 simulation samples.
Article
The temperature developed in bondwires of integrated circuits (ICs) is a possible source of malfunction and has to be taken into account during the design phase of an IC. Due to manufacturing tolerances, a bondwire's geometrical characteristics are uncertain parameters, and as such their impact has to be examined with the use of uncertainty quantification (UQ) methods. Sampling methods, like Monte Carlo (MC), converge slowly, while efficient alternatives scale badly with respect to the number of considered uncertainties. Possible remedies to this so-called curse of dimensionality are sought in the application of stochastic collocation (SC) on sparse grids (SGs) and of the recently emerged low-rank tensor decomposition methods, with emphasis on the tensor train (TT) decomposition.
Conference Paper
In this paper a novel improvement to the polynomial chaos (PC) approach for the uncertainty analysis of high-speed circuits is presented. The key feature of this work is the development of an alternative hyperbolic truncation scheme to replace the conventional linear truncation scheme used in the generation of PC expansions. This hyperbolic truncation scheme results in a sparse PC expansion with only a marginal loss of accuracy. The computational effort required to evaluate the coefficients of the resultant sparse expansion is only a small fraction of that required for full-blown PC expansions. A greedy adaptive methodology to determine the number of basis terms and evaluate the corresponding coefficients for the sparse expansion is also presented. This approach is validated on a nonlinear radio-frequency (RF) circuit against conventional full-blown PC methods.
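A hyperbolic truncation set is easy to enumerate; the sketch below (ours, with q = 0.5 as an assumed quasi-norm parameter) compares its size against the tensor-product and total-degree truncations for 3 variables and order 5.

```python
from itertools import product

def hyperbolic_index_set(n_vars, p, q=0.5):
    """Multi-indices alpha with q-quasi-norm (sum_i alpha_i^q)^(1/q) <= p."""
    return [alpha for alpha in product(range(p + 1), repeat=n_vars)
            if sum(a ** q for a in alpha) ** (1.0 / q) <= p]

n_vars, p = 3, 5
tensor = (p + 1) ** n_vars
total = sum(1 for a in product(range(p + 1), repeat=n_vars) if sum(a) <= p)
hyper = len(hyperbolic_index_set(n_vars, p))
print(tensor, total, hyper)   # 216, 56, 19: the hyperbolic set is the sparsest
```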
Chapter
This chapter introduces the reader to fundamental issues that arise when applying the Monte Carlo method to a commonly encountered problem in numerical computation. In its most basic form the problem is to evaluate the volume of a bounded region in multi-dimensional Euclidean space. The more general problem is to evaluate the integral of a function on such a region. The Monte Carlo method often offers a competitive, and sometimes the only useful, solution to the problem. The appeal of the Monte Carlo method arises when the shape of the region of interest makes solution by analytical methods impossible and, in the case of the more general function integration, when little is known about the smoothness and variational properties of the integrand, or when what is known precludes the application of alternative numerical evaluation techniques.
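In its most basic form the computation looks like this (our sketch): estimate the volume of the 6-dimensional unit ball by sampling the bounding cube; the O(N^-1/2) error rate is independent of the dimension.

```python
import numpy as np

rng = np.random.default_rng(4)
d, N = 6, 200_000
pts = rng.uniform(-1.0, 1.0, size=(N, d))          # bounding cube [-1, 1]^d
inside = np.count_nonzero(np.sum(pts ** 2, axis=1) <= 1.0)
volume = (2.0 ** d) * inside / N                   # cube volume times hit fraction
print(volume)                                      # exact: pi**3 / 6 ~= 5.1677
```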
Article
An adapted tensor-structured GMRES method for the TT format is proposed and investigated. The tensor train (TT) approximation is a robust approach to high-dimensional problems; one such class of problems is the solution of linear systems. In this work we study the convergence of the GMRES method in the presence of tensor approximations and provide relaxation techniques to improve its performance. Several numerical examples are presented. The method is also compared with a projection TT linear solver based on the ALS and DMRG methods. On a particular sPDE (high-dimensional parametric) problem, these methods exhibit comparable performance; with a good preconditioner, TT-GMRES outperforms the ALS solver.
Article
Little has been done in the study of these intriguing questions, and I do not wish to give the impression that any extensive set of ideas exists that could be called a "theory." What is quite surprising, as far as the histories of science and philosophy are concerned, is that the major impetus for the fantastic growth of interest in brain processes, both psychological and physiological, has come from a device, a machine, the digital computer. In dealing with a human being and a human society, we enjoy the luxury of being irrational, illogical, inconsistent, and incomplete, and yet of coping. In operating a computer, we must meet the rigorous requirements for detailed instructions and absolute precision. If we understood the ability of the human mind to make effective decisions when confronted by complexity, uncertainty, and irrationality then we could use computers a million times more effectively than we do. Recognition of this fact has been a motivation for the spurt of research in the field of neurophysiology.
Article
We investigate the convergence rate of approximations by finite sums of rank-1 tensors of solutions of multiparametric elliptic PDEs. Such PDEs arise, for example, in the parametric, deterministic reformulation of elliptic PDEs with random field inputs, based, for example, on the M-term truncated Karhunen-Loève expansion. Our approach could be regarded as either a class of compressed approximations of these solutions or as a new class of iterative elliptic problem solvers for high-dimensional, parametric, elliptic PDEs providing linear scaling complexity in the dimension M of the parameter space. It is based on rank-reduced, tensor-formatted separable approximations of the high-dimensional tensors and matrices involved in the iterative process, combined with the use of spectrally equivalent low-rank tensor-structured preconditioners to the parametric matrices resulting from a finite element discretization of the high-dimensional parametric, deterministic problems. Numerical illustrations for the M-dimensional parametric elliptic PDEs resulting from sPDEs on parameter spaces of dimension M ≤ 100 indicate the advantages of employing low-rank tensor-structured matrix formats in the numerical solution of such problems.
Article
A methodology for efficient tolerance analysis of electronic circuits based on nonsampling stochastic simulation of transients is formulated, implemented, and validated. We model the stochastic behavior of all quantities that are subject to tolerance spectrally with polynomial chaos. A library of stochastic models of linear and nonlinear circuit elements is created. In analogy to the deterministic implementation of the SPICE electronic circuit simulator, the overall stochastic circuit model is obtained using nodal analysis. In the proposed case studies, we analyze the influence of device tolerance on the response of a lowpass filter, the impact of temperature variability on the output of an amplifier, and the effect of changes of the load of a diode bridge on the probability density function of the output voltage. The case studies demonstrate that the novel methodology is computationally faster than the Monte Carlo method and more accurate and flexible than the root-sum-square method. This makes the stochastic circuit simulator, referred to as PolySPICE, a compelling candidate for the tolerance study of reliability-critical electronic circuits.
Article
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
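For concreteness, here is a minimal HOSVD sketch (ours, following the survey's description of Tucker as a higher-order PCA): the factor matrices are the left singular vectors of the mode-k unfoldings, and the core is the tensor contracted with their transposes.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move mode k to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    G = T
    for k, Uk in enumerate(U):
        G = mode_product(G, Uk.T, k)     # core tensor
    return G, U

T = np.random.default_rng(5).standard_normal((3, 4, 5))
G, U = hosvd(T)
R = G
for k, Uk in enumerate(U):
    R = mode_product(R, Uk, k)           # untruncated HOSVD reconstructs exactly
print(np.allclose(R, T))                 # True
```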
Efficient Hermite-based variability analysis using decoupling technique
  • T A Pham
Progress of tensor-based high-dimensional uncertainty quantification of process variations
  • Z He
  • Z Zhang
Adjoint sensitivity analysis algorithms for general circuits with distributed multiconductor transmission lines
  • A S Saini
Uncertainty quantification for integrated circuits and microelectromechanical systems
  • Z Zhang