Article

An ℋ₂ ⊗ ℒ₂-Optimal Model Order Reduction Approach for Parametric Linear Time-Invariant Systems


Abstract

So far, ℋ₂ ⊗ ℒ₂-optimal model order reduction (MOR) of linear time-invariant systems, preserving the affine parameter dependence, was only considered for special cases by Baur et al. in 2011. In this contribution, we present necessary conditions for an ℋ₂ ⊗ ℒ₂-optimal parametric reduced-order model for general affine parametric systems, generalizing the special case investigated by Baur et al.


... Exploring potential links with tangential interpolation is an interesting direction for future research, which might help identify how to choose these directions optimally. The recent work [35] might provide a link in this direction. ...
... where A(0)⁻¹U⁽ⁱ⁾ ∈ ℝⁿˣᵏ and A(0)⁻¹U⁽ⁱ⁺¹⁾ ∈ ℝⁿˣᵏ. Taking norms and using the submultiplicative property in (35) yields ...
... Remark 1. Equation (35) shows that the changes in the candidate basis lie in Range( ...
Preprint
Nonlinear parametric inverse problems appear in many applications. Here, we focus on diffuse optical tomography (DOT) in medical imaging to recover unknown images of interest, such as cancerous tissue in a given medium, using a mathematical (forward) model. The forward model in DOT is a diffusion-absorption model for the photon flux. The main bottleneck in these problems is the repeated evaluation of the large-scale forward model. For DOT, this corresponds to solving large linear systems for each source and frequency at each optimization step. Moreover, Newton-type methods, often the method of choice, require additional linear solves with the adjoint to compute derivative information. Emerging technology allows for large numbers of sources and detectors, making these problems prohibitively expensive. Reduced order models (ROM) have been used to drastically reduce the system size in each optimization step, while solving the inverse problem accurately. However, for large numbers of sources and detectors, just the construction of the candidate basis for the ROM projection space incurs a substantial cost, as matching the full parameter gradient matrix in interpolatory model reduction requires large linear solves for all sources and frequencies and all detectors and frequencies for each parameter interpolation point. As this candidate basis numerically has low rank, this construction is followed by a rank-revealing factorization that typically reduces the number of vectors in the candidate basis substantially. We propose to use randomization to approximate this basis with a drastically reduced number of large linear solves. We also provide a detailed analysis for the low-rank structure of the candidate basis for our problem of interest. Even though we focus on the DOT problem, the ideas presented are relevant to many other large scale inverse problems and optimization problems.
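The randomized construction described above can be illustrated with a generic randomized range finder in the spirit of Halko et al.; the sketch below is not the authors' implementation (the operator, dimensions, and oversampling choices are all illustrative), but it shows how a handful of random probes can replace many large linear solves when the candidate basis is numerically low-rank.

```python
import numpy as np

def randomized_range(apply_A, m, k, oversample=10, rng=None):
    """Approximate an orthonormal basis for the range of an implicit
    operator A (with m columns) from k + oversample random probes."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((m, k + oversample))  # random test matrix
    Y = apply_A(Omega)                                # Y = A @ Omega sketches range(A)
    Q, _ = np.linalg.qr(Y)                            # orthonormalize the sketch
    return Q

# usage: a 200 x 300 matrix of exact rank 5 stands in for the candidate basis
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
Q = randomized_range(lambda W: A @ W, m=300, k=5)
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(err)  # essentially zero: 15 probes capture the rank-5 range
```

In practice `apply_A` would hide the large linear solves for all sources and frequencies, so only `k + oversample` block solves are needed instead of one per source/detector pair.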
... B(µ)ᵀP(µ)B(µ)) dµ, where P(µ) solves AᵀP(µ) + P(µ)A + C(µ)ᵀC(µ) = 0 [38]. Discretizing (3.15) using a suitable quadrature formula with nodes {µᵢ} and weights {ωᵢ}, i = 1, . . . ...
... In [5, 38], b and c are scalar functions. ...
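The discretization sketched in the excerpt above, quadrature in the parameter combined with one Lyapunov solve per node, can be written down in a few lines. The toy system below is entirely invented; only the structure (a weighted sum of trace(B(µ)ᵀP(µ)B(µ)) terms with P(µ) solving the parametric Lyapunov equation) follows the excerpt.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy affine-parametric system (all matrices invented for illustration):
# A fixed and stable, B(mu) and C(mu) affine in the scalar parameter mu.
n, m, p = 6, 2, 1
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # comfortably stable
B0, B1 = rng.standard_normal((n, m)), rng.standard_normal((n, m))
C0, C1 = rng.standard_normal((p, n)), rng.standard_normal((p, n))
B = lambda mu: B0 + mu * B1
C = lambda mu: C0 + mu * C1

# Gauss-Legendre nodes/weights mapped to the parameter interval [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(8)
nodes, weights = 0.5 * (nodes + 1.0), 0.5 * weights

h2l2_sq = 0.0
for mu, w in zip(nodes, weights):
    # observability-type Gramian: A^T P(mu) + P(mu) A + C(mu)^T C(mu) = 0
    P = solve_continuous_lyapunov(A.T, -C(mu).T @ C(mu))
    h2l2_sq += w * np.trace(B(mu).T @ P @ B(mu))

print(h2l2_sq)  # squared H2 (x) L2-type norm of the toy system
```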
Preprint
Full-text available
Rational Krylov subspaces have become a reference tool in dimension reduction procedures for several application problems. When data matrices are symmetric, a short-term recurrence can be used to generate an associated orthonormal basis. In the past this procedure was abandoned because it requires twice the number of linear system solves per iteration than with the classical long-term method. We propose an implementation that allows one to obtain key rational subspace matrices without explicitly storing the whole orthonormal basis, with a moderate computational overhead associated with sparse system solves. Several applications are discussed to illustrate the advantages of the proposed procedure.
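As a point of reference, the classical long-term (fully orthogonalizing) rational Krylov basis can be built as below; the short-term recurrence for symmetric data that the abstract describes avoids exactly the storage of the whole basis that this naive sketch requires. Shifts and matrices are illustrative.

```python
import numpy as np

def rational_krylov_basis(A, b, shifts):
    """Orthonormal basis of the rational Krylov space built by applying
    (A - s_j I)^{-1} to the previous basis vector.  This long-term variant
    stores and orthogonalizes against the full basis."""
    n = len(b)
    V = np.zeros((n, len(shifts)))
    w = b / np.linalg.norm(b)
    for j, s in enumerate(shifts):
        v = np.linalg.solve(A - s * np.eye(n), w)  # one (sparse) solve per step
        for _ in range(2):                         # two Gram-Schmidt passes
            for i in range(j):
                v -= (V[:, i] @ v) * V[:, i]
        V[:, j] = v / np.linalg.norm(v)
        w = V[:, j]
    return V

# usage on a symmetric test matrix with illustrative shifts
rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2
b = rng.standard_normal(50)
V = rational_krylov_basis(A, b, shifts=[0.5, 1.0, 2.0, 4.0])
print(np.linalg.norm(V.T @ V - np.eye(4)))  # ~ 0: columns are orthonormal
```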
... The ℋ₂ ⊗ ℒ₂ FONC, which we first described in [15], is a direct consequence of Theorem 3.1. Hence, we restate the ℋ₂ ⊗ ℒ₂ FONC here as the following corollary. ...
... Due to the similarity of the equations in (25) with the Wilson conditions (8) for non-parametric systems, in [15] (and later in [19, Theorem 6.11]) we referred to (25) as Wilson-type optimality conditions, but for conciseness, we use FONC in this paper. ...
Preprint
In this paper, we generalize existing frameworks for ℋ₂ ⊗ ℒ₂-optimal model order reduction to a broad class of parametric linear time-invariant systems. To this end, we derive first-order necessary optimality conditions for a class of structured reduced-order models, and then, building on those, propose a stability-preserving optimization-based method for computing locally ℋ₂ ⊗ ℒ₂-optimal reduced-order models. We also make a theoretical comparison to existing approaches in the literature and, in numerical experiments, show how our new method, with reasonable computational effort, produces stable optimized reduced-order models with significantly lower approximation errors.
... Then, one can try to construct V directly to minimize this composite measure. We refer the reader to [4] and more recent works [17], [21] in this direction for the unstructured setting. ...
... Parameter sampling for constructing the local bases was not the focus of this work. Any efficient parameter selection methodology can be incorporated into our framework and will be considered in future work together with the recent composite ℋ₂ × ℒ₂-optimal basis constructions [17], [21]. Extension to the nonlinear parametric setting is also an important topic to consider. ...
Preprint
We develop a structure-preserving parametric model reduction approach for linearized swing equations where parametrization corresponds to variations in operating conditions. We employ a global basis approach to develop the parametric reduced model, in which we concatenate the local bases obtained via ℋ₂-based interpolatory model reduction. The residue of the underlying dynamics corresponding to the simple pole at zero varies with the parameters. Therefore, to have bounded ℋ₂ and ℋ∞ errors, the reduced model residue for the pole at zero should match the original one over the entire parameter domain. Our framework achieves this goal by enriching the global basis based on a residue analysis. The effectiveness of the proposed method is illustrated through two numerical examples.
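The global-basis step, concatenating local bases and removing redundant directions, is commonly realized with a rank-revealing factorization. A minimal sketch (with invented local bases and an SVD-based truncation) might look as follows.

```python
import numpy as np

def global_basis(local_bases, tol=1e-10):
    """Concatenate local reduction bases and compress them into a single
    orthonormal global basis via a truncated SVD."""
    V = np.hstack(local_bases)           # n x (sum of local dimensions)
    U, s, _ = np.linalg.svd(V, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))      # drop (near-)redundant directions
    return U[:, :r]

# usage: two 3-dimensional local bases in R^20 sharing a 2-dimensional overlap
rng = np.random.default_rng(3)
X = rng.standard_normal((20, 4))
V1, _ = np.linalg.qr(X[:, :3])           # spans columns 0..2 of X
V2, _ = np.linalg.qr(X[:, 1:4])          # spans columns 1..3 of X
Vg = global_basis([V1, V2])
print(Vg.shape)  # (20, 4): the union of the two local subspaces
```

Basis enrichment (e.g., to match a residue at a fixed pole) amounts to appending the extra vectors to `local_bases` before the compression step.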
... The best location for the next local model is determined per step, based on maximizing error measures on a discrete training set of parameter samples. Such an approach does not provide a point selection that is strictly optimal with respect to some norm, as the methods of [5, 25, 29] do. However, neither the integration of error measures over the whole parameter space nor the repeated factorization of K_d at all sampling points is necessary; thus, greedy approaches are attractive for application to large-scale industry FOMs. ...
Chapter
A parametric model order reduction approach for the frequency-domain analysis of complex industry models is presented. Linear time-invariant subsystem models are reduced for use in domain integration approaches in the context of structural dynamics. These subsystems have a moderate number of resonances in the considered frequency band but a high-dimensional input parameter space and a large number of states. A global basis approach is chosen for model order reduction, in combination with an optimization-based greedy search strategy for the model training. Krylov subspace methods are employed for local basis generation, and a goal-oriented error estimate based on residual expressions is developed as the optimization objective. As the optimization provides only local maxima of the non-convex error in parameter space, an in-situ and an a-posteriori error evaluation strategy are combined. For the latter, a statistical error evaluation is performed based on Bayesian inference. The method finally enables parametric model order reduction for industry finite element models with complex modeling techniques and many degrees of freedom. After discussing the method on a beam example, this is demonstrated on an automotive example.
... Except for special cases [9,34], how one chooses optimal parameter sampling points with respect to a joint global frequency-parameter error measure has not been known until recently. In [43], Hund et al. tackle this joint-optimization problem by deriving optimality conditions and then constructing model reduction bases that enforce those conditions. The most widely used approaches for global basis construction in pMOR are greedy or optimization-based sampling strategies; see [14] for a survey. ...
Article
Full-text available
We consider the reduction of parametric families of linear dynamical systems having an affine parameter dependence that allow for low-rank variation in the state matrix. Usual approaches for parametric model reduction typically involve exploring the parameter space to identify representative parameter values and the associated models become the principal focus of model reduction methodology. These models are then combined in various ways in order to interpolate the response. The initial exploration of the parameter space can be a forbiddingly expensive task. A different approach is proposed here that requires neither parameter sampling nor parameter space exploration. Instead, we represent the system response function as a composition of four subsystem response functions that are non-parametric with a purely parameter-dependent function. One may apply any one of a number of standard (non-parametric) model reduction strategies to reduce the subsystems independently, and then conjoin these reduced models with the underlying parameterization to obtain the overall parameterized response. Our approach has elements in common with the parameter mapping approach of Baur et al. (PAMM 14(1), 19–22 2014) but offers greater flexibility and potentially greater control over accuracy. In particular, a data-driven variation of our approach is described that exercises this flexibility through the use of limited frequency-sampling of the underlying non-parametric models. The parametric structure of our system representation allows for a priori guarantees of system stability in the resulting reduced models across the full range of parameter values. Incorporation of system theoretic error bounds allows us to determine appropriate approximation orders for the non-parametric systems sufficient to yield uniformly high accuracy across the parameter range. 
We illustrate our approach on a class of structural damping optimization problems and on a benchmark model of thermal conduction in a semiconductor chip. The parametric structure of our reduced system representation lends itself very well to the development of optimization strategies making use of efficient cost function surrogates. We discuss this in some detail for damping parameter and location optimization for vibrating structures.
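For state matrices with low-rank parametric variation, say A(p) = A0 + U diag(p) Vᵀ, the Sherman-Morrison-Woodbury identity is one way to see how the parametric response can decompose into non-parametric subsystem responses stitched together by a purely parameter-dependent function. The sketch below illustrates this identity on a random toy system; it is not the authors' exact construction, and all data are invented.

```python
import numpy as np

# Toy SISO system with low-rank parameter variation A(p) = A0 + U diag(p) V^T
rng = np.random.default_rng(6)
n, k = 30, 2
A0 = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
U = 0.3 * rng.standard_normal((n, k))
Vlr = 0.3 * rng.standard_normal((n, k))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

def H_direct(s, p):
    """Transfer function evaluated on the full parametric system."""
    Ap = A0 + U @ np.diag(p) @ Vlr.T
    return (c @ np.linalg.solve(s * np.eye(n) - Ap, b))[0, 0]

def H_composed(s, p):
    """Same value from four non-parametric subsystem responses combined
    by a purely parameter-dependent function (Woodbury identity)."""
    K = s * np.eye(n) - A0
    Kb, KU = np.linalg.solve(K, b), np.linalg.solve(K, U)
    Hcb, HcU = c @ Kb, c @ KU          # c K^{-1} b   and  c K^{-1} U
    HVb, HVU = Vlr.T @ Kb, Vlr.T @ KU  # V^T K^{-1} b and  V^T K^{-1} U
    D = np.diag(p)
    return (Hcb + HcU @ D @ np.linalg.solve(np.eye(k) - HVU @ D, HVb))[0, 0]

p = np.array([0.3, -0.2])
diff = abs(H_direct(2.0, p) - H_composed(2.0, p))
print(diff)  # ~ 0: the two evaluations agree
```

Because the four subsystem responses are non-parametric, each can be reduced independently by any standard method before recombining, which is the flexibility the abstract emphasizes.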
Chapter
Interpolatory methods offer a powerful framework for generating reduced-order models (ROMs) for non-parametric or parametric systems with time-varying inputs. Choosing the interpolation points adaptively remains an area of active interest. A greedy framework has been introduced in [12, 14] to choose interpolation points automatically using a posteriori error estimators. Nevertheless, when the parameter range is large or the parameter space dimension is greater than two, the greedy algorithm may take considerable time, since the training set needs to include a large number of parameters. As a remedy, we introduce an adaptive training technique by learning an efficient a posteriori error estimator over the parameter domain. A fast learning process is created by interpolating the error estimator using radial basis functions (RBFs) over a fine parameter training set representing the whole parameter domain; the error estimator itself is evaluated only on a coarse training set including a few parameter samples. The algorithm is an extension of the work in [9] to interpolatory model order reduction (MOR) in the frequency domain. Beyond the work in [9], we use a newly proposed inf-sup-constant-free error estimator in the frequency domain [14], which is often much tighter than the error estimator using the inf-sup constant. Three numerical examples demonstrate the efficiency and validity of the proposed approach.
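The core idea, evaluating the expensive estimator only on a coarse set and sweeping a cheap RBF surrogate over a fine set, can be sketched with SciPy's RBFInterpolator. The error-estimator stand-in and the parameter grids below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in for an expensive a posteriori error estimator (invented; in
# practice it would involve residual computations with the current ROM).
def error_estimator(mu):
    return np.exp(-5 * (mu - 0.3) ** 2) + 0.5 * np.exp(-20 * (mu - 0.8) ** 2)

coarse = np.linspace(0.0, 1.0, 9)               # few expensive evaluations
values = np.array([error_estimator(m) for m in coarse])
surrogate = RBFInterpolator(coarse[:, None], values)

fine = np.linspace(0.0, 1.0, 501)               # cheap surrogate sweep
pred = surrogate(fine[:, None])
mu_next = fine[np.argmax(pred)]                 # next greedy sample point
print(mu_next)  # lands near the true estimator peak at mu = 0.3
```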
Article
Motivated by a recently proposed error estimator for the transfer function of the reduced-order model of a given linear dynamical system, we develop further theoretical results in this work. Moreover, we propose several variants of the error estimator and compare those variants with the existing ones both theoretically and numerically. It is shown that some of the proposed error estimators perform better than, or as well as, the existing ones. All the error estimators considered can be easily extended to estimate the output error of reduced-order modeling for steady linear parametric systems.
Article
Full-text available
We provide a unifying projection-based framework for structure-preserving interpolatory model reduction of parameterized linear dynamical systems, i.e., systems having a structured dependence on parameters that we wish to retain in the reduced-order model. The parameter dependence may be linear or nonlinear and is retained in the reduced-order model. Moreover, we are able to give conditions under which the gradient and Hessian of the system response with respect to the system parameters are matched in the reduced-order model. We provide a systematic approach, built on established interpolatory ℋ₂-optimal model reduction methods, that will produce parameterized reduced-order models having high fidelity throughout a parameter range of interest. For single-input/single-output systems with parameters in the input/output maps, we provide reduced-order models that are optimal with respect to an ℋ₂ ⊗ ℒ₂ joint error measure. The capabilities of these approaches are illustrated by several numerical examples from technical applications.
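A minimal instance of such structure-preserving projection for an affine dependence A(µ) = A0 + µA1 is sketched below: each affine term is projected separately, so the ROM inherits the parametric structure, and Galerkin projection onto solution snapshots yields interpolation of the transfer function at the sampled (s, µ) pairs. All matrices and sample points are invented for illustration.

```python
import numpy as np

# Affine parametric SISO system A(mu) = A0 + mu*A1 (all data invented)
rng = np.random.default_rng(4)
n = 40
A0 = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

# Galerkin basis from solution snapshots at a few sampled (s, mu) pairs
samples = [(1.0, 0.2), (2.0, 0.5), (4.0, 0.8)]
cols = [np.linalg.solve(s * np.eye(n) - (A0 + mu * A1), b) for s, mu in samples]
V, _ = np.linalg.qr(np.hstack(cols))

# Projecting each affine term separately keeps the parametric structure:
# Ar(mu) = V^T A0 V + mu * V^T A1 V
A0r, A1r, br, cr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b, c @ V
r = A0r.shape[0]

H = lambda s, mu: (c @ np.linalg.solve(s * np.eye(n) - (A0 + mu * A1), b))[0, 0]
Hr = lambda s, mu: (cr @ np.linalg.solve(s * np.eye(r) - (A0r + mu * A1r), br))[0, 0]

gap = max(abs(H(s, mu) - Hr(s, mu)) for s, mu in samples)
print(gap)  # ~ 0: the ROM interpolates H at every sampled (s, mu) pair
```

The interpolation property follows from the standard projection lemma: whenever the snapshot (sI - A(µ))⁻¹b lies in the range of V, the Galerkin ROM reproduces H(s, µ) exactly at that point.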
Article
In today's technological world, physical and artificial processes are mainly described by mathematical models, which can be used for simulation or control. These processes are dynamical systems, as their future behavior depends on their past evolution. The weather and very large scale integration (VLSI) circuits are examples, the former physical and the latter artificial. In simulation (control) one seeks to predict (modify) the system behavior; however, simulation of the full model is often not feasible, necessitating simplification of it. Due to limited computational, accuracy, and storage capabilities, system approximation (the development of simplified models that capture the main features of the original dynamical systems) evolved. Simplified models are used in place of original complex models and result in simulation (control) with reduced computational complexity. This book deals with what may be called the curse of complexity, by addressing the approximation of dynamical systems described by a finite set of differential or difference equations together with a finite set of algebraic equations. Our goal is to present approximation methods related to the singular value decomposition (SVD), to Krylov or moment matching methods, and to combinations thereof, referred to as SVD-Krylov methods. Part I addresses the above in more detail. Part II is devoted to a review of the necessary mathematical and system theoretic prerequisites. In particular, norms of vectors and (finite) matrices are introduced in Chapter 3, together with a detailed discussion of the SVD of matrices. The approximation problem in the induced 2-norm and its solution given by the Schmidt-Eckart-Young-Mirsky theorem are tackled next. This result is generalized to linear dynamical systems in Chapter 8, which covers Hankel-norm approximation. Elements of numerical linear algebra are also presented in Chapter 3. Chapter 4 presents some basic concepts from linear system theory.
Its first section discusses the external description of linear systems in terms of convolution integrals or convolution sums. The section following treats the internal description of linear systems. This is a representation in terms of first-order ordinary differential or difference equations, depending on whether we are dealing with continuous- or discrete-time systems. The associated structural concepts of reachability and observability are analyzed. Gramians, which are important tools for system approximation, are introduced in this chapter and their properties are explored. The last section of Chapter 4 is concerned with the relationship between internal and external descriptions, which is known as the realization problem. Finally, aspects of the more general problem of rational interpolation are displayed.
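The Schmidt-Eckart-Young-Mirsky result mentioned above can be demonstrated in a few lines: truncating the SVD gives the best rank-k approximation, with induced 2-norm error equal to the first neglected singular value. The test matrix is arbitrary.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the induced 2-norm (and in the
    Frobenius norm), per the Schmidt-Eckart-Young-Mirsky theorem."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # keep the k leading SVD triplets

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 6))
Ak = best_rank_k(A, 3)
s = np.linalg.svd(A, compute_uv=False)
err = np.linalg.norm(A - Ak, 2)
print(err - s[3])  # ~ 0: the 2-norm error is the first neglected singular value
```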