Article

Reachability Analysis of Randomly Perturbed Hamiltonian Systems

Abstract

In this paper, we revisit energy-based concepts of controllability and reformulate them for control-affine nonlinear systems perturbed by white noise. Specifically, we discuss the relation between controllability of deterministic systems and the corresponding stochastic control systems in the limit of small noise and in the case in which the target state is a measurable subset of the state space. We derive computable expressions for hitting probabilities and mean first hitting times in terms of empirical Gramians when the dynamics is given by a Hamiltonian system perturbed by dissipation and noise, and provide an easily computable expression for the corresponding controllability function in terms of a subset of the state variables.
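As a rough illustration of the quantities discussed above, the following sketch estimates a mean first hitting time for a one-dimensional Hamiltonian system perturbed by dissipation and white noise by plain Monte Carlo. The double-well potential, all parameter values, and the target set are illustrative choices, not taken from the paper.

```python
import numpy as np

def mean_first_hitting_time(grad_V, gamma, sigma, q0, p0, in_target,
                            dt=2e-3, t_max=20.0, n_traj=100, seed=0):
    """Monte Carlo estimate of the mean first hitting time of a target set for
        dq = p dt,   dp = (-grad_V(q) - gamma * p) dt + sigma dW.
    Trajectories that have not hit the target by t_max are censored at t_max,
    so in that case the estimate is a lower bound."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    times = []
    for _ in range(n_traj):
        q, p = q0, p0
        t_hit = t_max
        for k in range(n_steps):
            q += p * dt
            p += (-grad_V(q) - gamma * p) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal()
            if in_target(q, p):
                t_hit = (k + 1) * dt
                break
        times.append(t_hit)
    return float(np.mean(times))

# Illustrative example: double-well potential V(q) = (q^2 - 1)^2 / 4,
# started in the left well, target = neighbourhood of the right minimum.
mfht = mean_first_hitting_time(grad_V=lambda q: q * (q**2 - 1),
                               gamma=1.0, sigma=1.0, q0=-1.0, p0=0.0,
                               in_target=lambda q, p: abs(q - 1.0) < 0.2)
print(round(mfht, 2))
```

The same loop with the noise switched off and a control input added would recover the deterministic reachability picture the abstract starts from.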

Article
We analyze structure-preserving model order reduction methods for Ornstein–Uhlenbeck processes and linear S(P)DEs with multiplicative noise based on balanced truncation. For the first time, we include in this study the analysis of non-zero initial conditions. We moreover allow for feedback-controlled dynamics for solving stochastic optimal control problems with reduced-order models and prove novel error bounds for a class of linear quadratic regulator problems. We provide numerical evidence for the bounds and discuss the application of our approach to enhanced sampling methods from non-equilibrium statistical mechanics.
Article
Full-text available
Optimal control of diffusion processes is intimately connected to the problem of solving certain Hamilton–Jacobi–Bellman equations. Building on recent machine learning inspired approaches towards high-dimensional PDEs, we investigate the potential of iterative diffusion optimisation techniques, in particular considering applications in importance sampling and rare event simulation, and focusing on problems without diffusion control, with linearly controlled drift and running costs that depend quadratically on the control. More generally, our methods apply to nonlinear parabolic PDEs with a certain shift invariance. The choice of an appropriate loss function being a central element in the algorithmic design, we develop a principled framework based on divergences between path measures, encompassing various existing methods. Motivated by connections to forward-backward SDEs, we propose and study the novel log-variance divergence, showing favourable properties of corresponding Monte Carlo estimators. The promise of the developed approach is exemplified by a range of high-dimensional and metastable numerical examples.
Thesis
Full-text available
This thesis proposes several applications of reachability analysis to control and assess stability of power systems with formal guarantees. Simply put, reachability analysis makes it possible to compute the bounds of all possible trajectories for a range of operating conditions, while simultaneously meeting the practical requirements of realistic systems found in the power industry. Novel methods have been developed in this thesis to exploit the advantages of employing reachability analysis in a wide range of applications. First, we investigate the assessment of transient stability via compositional techniques to improve the algorithmic efficiency of classical reachability algorithms. A special algorithm was developed, capable of drastically reducing the computational efforts associated with existing techniques. This made it possible to establish transient stability of power systems formalized via a set of differential algebraic equations and consisting of more than 100 state variables. Second, we propose an algorithmic procedure that extends existing techniques computing reachable sets, in order to estimate the so-called region of attraction, which is known to be of great importance for the stability analysis of nonlinear systems. The developed method is compared with alternative and dominant techniques in this research area. Third, we present the synthesis and verification of linear parameter-varying controllers in order to robustly establish transient stability of multi-machine power systems with formal guarantees. Both tasks are solved simultaneously in a systematic fashion within the context of a unified framework. Several benchmark examples are considered to showcase the applicability and scalability of the proposed approach. Finally, we illustrate how reachability analysis can be utilized to verify safety of critical components found in power systems.
In particular, we consider a realistic configuration of a boiler system within a combined cycle heat and power plant, in which the loss of the boiler leads to the emergency shut-down of the plant, hence jeopardizing safety of the complete utility grid. The task of verifying safety of the boiler cannot be achieved using numerical time-domain simulations, since only a single trajectory out of infinitely many can be checked at a time.
Article
Full-text available
This paper is the first part of the project devoted to studying the interconnection between controllability properties of a dynamical system and large-time asymptotics of trajectories for the associated stochastic system. It is proved that the approximate controllability to a given point and solid controllability from the same point imply the uniqueness of a stationary measure and exponential mixing in the total variation metric. This result is then applied to random differential equations on a compact Riemannian manifold. In the second part, we shall replace the solid controllability by a stabilisability condition and prove that it is still sufficient for the uniqueness of a stationary distribution, whereas the convergence to it holds in the weaker dual-Lipschitz metric.
Article
Full-text available
Control of nonlinear large-scale dynamical networks, e.g., collective behavior of agents interacting via a scale-free connection topology, is a central problem in many scientific and engineering fields. For the linear version of this problem, the so-called controllability Gramian has played an important role to quantify how effectively the dynamical states are reachable by a suitable driving input. In this paper, we first extend the notion of the controllability Gramian to nonlinear dynamics in terms of the Gibbs distribution. Next, we show that, when the networks are open to environmental noise, the newly defined Gramian is equal to the covariance matrix associated with randomly excited, but uncontrolled, dynamical state trajectories. This fact theoretically justifies a simple Monte Carlo simulation that can extract effectively controllable subdynamics in nonlinear complex networks. In addition, the result provides a novel insight into the relationship between controllability and statistical mechanics.
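The equality between the Gibbs-distribution Gramian and the covariance of noise-excited trajectories can be checked in the linear special case, where the stationary covariance solves a Lyapunov equation. The following sketch compares a Monte Carlo covariance estimate against the Lyapunov solution; the damped-oscillator matrices and all tolerances are illustrative assumptions for the example.

```python
import numpy as np

def lyapunov_gramian(A, B):
    """Solve A W + W A^T + B B^T = 0 for the controllability Gramian
    via the Kronecker (vectorisation) trick."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -(B @ B.T).reshape(-1)).reshape(n, n)

def covariance_of_noise_excited_trajectories(A, B, dt=1e-3, n_steps=20_000,
                                             burn_in=10_000, n_traj=1_000,
                                             seed=1):
    """Empirical covariance of the randomly excited, uncontrolled dynamics
    dx = A x dt + B dW (Euler-Maruyama over an ensemble of trajectories)."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    X = np.zeros((n_traj, n))
    acc, count = np.zeros((n, n)), 0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal((n_traj, m))
        X = X + X @ A.T * dt + dW @ B.T
        if k >= burn_in:                 # sample only after relaxation
            acc += X.T @ X
            count += n_traj
    return acc / count

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # illustrative damped oscillator
B = np.array([[0.0], [1.0]])
W_exact = lyapunov_gramian(A, B)           # for this example W_exact is the identity
W_mc = covariance_of_noise_excited_trajectories(A, B)
print(np.allclose(W_mc, W_exact, atol=0.15))
```

For nonlinear networks the Lyapunov solve is unavailable, but the Monte Carlo estimator above carries over unchanged, which is the simulation the abstract's result justifies.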
Chapter
Full-text available
The dynamical behavior of many systems arising in physics, chemistry, biology, etc. is dominated by rare but important transition events between long lived states. For over 70 years, transition state theory (TST) has provided the main theoretical framework for the description of these events [17,33,34]. Yet, while TST and evolutions thereof based on the reactive flux formalism [1, 5] (see also [30,31]) give an accurate estimate of the transition rate of a reaction, at least in principle, the theory says very little about the mechanism of this reaction. Recent advances, such as transition path sampling (TPS) of Bolhuis, Chandler, Dellago, and Geissler [3, 7] or the action method of Elber [15, 16], may seem to go beyond TST in that respect: these techniques indeed allow one to sample the ensemble of reactive trajectories, i.e. the trajectories by which the reaction occurs. And yet, the reactive trajectories may again be rather uninformative about the mechanism of the reaction. This may sound paradoxical at first: what more than actual reactive trajectories could one need to understand a reaction? The problem, however, is that the reactive trajectories by themselves give only very indirect information about the statistical properties of these trajectories. This is similar to why statistical mechanics is not simply a footnote in books about classical mechanics. What is the probability density that a trajectory is at a given location in state space, conditional on it being reactive? What is the probability current of these reactive trajectories? What is their rate of appearance? These are the questions of interest, and they are not easy to answer directly from the ensemble of reactive trajectories. The right framework to tackle these questions also goes beyond standard equilibrium statistical mechanics because of the nontrivial bias that the very definition of the reactive trajectories implies: they must be involved in a reaction.
The aim of this chapter is to introduce the reader to the probabilistic framework one can use to characterize the mechanism of a reaction and obtain the probability density, current, rate, etc. of the reactive trajectories.
Article
Full-text available
We discuss the relation of a certain type of generalized Lyapunov equations to Gramians of stochastic and bilinear systems together with the corresponding energy functionals. While Gramians and energy functionals of stochastic linear systems show a strong correspondence to the analogous objects for deterministic linear systems, the relation of Gramians and energy functionals for bilinear systems is less obvious. We discuss results from the literature for the latter problem and provide new characterizations of input and output energies of bilinear systems in terms of algebraic Gramians satisfying generalized Lyapunov equations. In any of the considered cases, the definition of algebraic Gramians allows us to compute balancing transformations and implies model reduction methods analogous to balanced truncation for linear deterministic systems. We illustrate the performance of these model reduction methods by showing numerical experiments for different bilinear systems.
Conference Paper
Full-text available
For stochastic hybrid systems, safety verification methods have little support, mainly because of the complexity and difficulty of the associated mathematical problems. The key to the methods that have succeeded in solving various instances of this problem is to prove the equivalence of these instances with known problems. In this paper, we apply the same pattern to the most general model of stochastic hybrid systems. The stochastic reachability problem can be treated as an exit problem for a suitable class of Markov processes, and the solutions of this problem can be characterised using Hamilton–Jacobi theory.
Article
Full-text available
Purpose: Nonlinear dynamical systems may, under certain conditions, be represented by a bilinear system. The paper is concerned with the construction of the controllability and observability gramians for the corresponding bilinear system. Such gramians form the core of model reduction schemes involving balancing. Design/methodology/approach: The paper examines certain properties of the bilinear system and identifies parameters that capture important information relating to the behaviour of the system. Findings: Novel approaches for the determination of approximate constant gramians for use in balancing‐type model reduction techniques are presented. Numerical examples are given which indicate the efficacy of the proposed formulations. Research limitations/implications: The systems under consideration are restricted to the so‐called weakly nonlinear systems, i.e. those without strong nonlinearities where the essential type of behaviour of the system is determined by its linear part. Practical implications: The suggested methods lead to an improvement in the accuracy of model reduction. Model reduction is a vital aspect of modern system simulation. Originality/value: The proposed novel approaches for model reduction are particularly beneficial for the design of controllers for nonlinear systems and for the design of radio‐frequency integrated circuits.
Article
Full-text available
We consider operators of Kramers–Fokker–Planck type in the semi-classical limit such that the exponent of the associated Maxwellian is a Morse function with two local minima and a saddle point. Under suitable additional assumptions we establish the complete asymptotics of the exponentially small splitting between the first two eigenvalues.
Article
Full-text available
In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired ‘safe’ region of the state space is characterized in terms of a value function, and ‘maximally safe’ Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.
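The backward recursion for the safety value function described above can be sketched on a finite-state abstraction. The random-walk example and the safe set below are illustrative; the recursion itself is the standard multiplicative dynamic program for finite-horizon probabilistic safety.

```python
import numpy as np

def safety_probabilities(P, safe, horizon):
    """Backward dynamic programming for finite-horizon probabilistic safety:
    V(x) = P(the state stays in the safe set for `horizon` steps | x_0 = x).
    P is an (n, n) transition matrix, safe a boolean mask of safe states."""
    V = safe.astype(float)            # terminal condition: 1 on the safe set
    for _ in range(horizon):
        V = safe * (P @ V)            # multiplicative form of the recursion
    return V

# Illustrative example: random walk on {0,...,10}; states 0 and 10 are unsafe.
n = 11
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5
safe = np.ones(n, dtype=bool)
safe[[0, n - 1]] = False
V = safety_probabilities(P, safe, horizon=10)
print(V[5] > V[1])   # the centre of the safe region is safer than its edge
```

The set of initial conditions with a probabilistic guarantee at level p is then simply the superlevel set {x : V(x) >= p}.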
Article
Full-text available
Nonlinear model predictive control has become increasingly popular in the chemical process industry. Highly accurate models can now be simulated with modern dynamic simulators combined with powerful optimization algorithms. However, computational requirements grow with the complexity of the models, and many rigorous dynamic models require too much computation time to be useful for real-time model-based controllers. One possible solution is the application of model reduction techniques. The method introduced here reduces nonlinear systems while retaining most of the input–output properties of the original system. The technique is based on empirical gramians that capture the nonlinear behavior of the system near an operating point. The gramians are then balanced and the less important states are reduced via a Galerkin projection onto the remaining states. This method has the advantage that it requires only linear matrix computations while being applicable to nonlinear systems.
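For a linear system the empirical gramians reduce to the usual integrals of impulse and initial-state responses, which makes the balancing-plus-projection pipeline easy to sketch. The explicit Euler time-stepping, the diagonal test system, and the truncation order below are illustrative choices, not the paper's setup.

```python
import numpy as np

def empirical_gramians(A, B, C, dt=1e-3, T=40.0):
    """Empirical Gramians for a linear system, accumulated from simulated
    impulse responses (for Wc) and initial-state output responses (for Wo)
    with explicit Euler time-stepping."""
    n = A.shape[0]
    Wc, Wo = np.zeros((n, n)), np.zeros((n, n))
    X = B.copy()        # impulse responses, one column per input channel
    Y = np.eye(n)       # state responses to unit initial conditions
    for _ in range(int(T / dt)):
        Wc += X @ X.T * dt
        Wo += Y.T @ (C.T @ C) @ Y * dt
        X += A @ X * dt
        Y += A @ Y * dt
    return Wc, Wo

def balanced_truncation(A, B, C, r, **kw):
    """Balance the Gramians and keep the r states with the largest
    Hankel singular values (Galerkin-style projection)."""
    Wc, Wo = empirical_gramians(A, B, C, **kw)
    L = np.linalg.cholesky(Wc)
    U, s2, _ = np.linalg.svd(L.T @ Wo @ L)     # s2 = squared Hankel SVs
    Tr = L @ U[:, :r] / s2[:r] ** 0.25         # right projection
    Tl = np.linalg.solve(L.T, U[:, :r]) * s2[:r] ** 0.25
    return Tl.T @ A @ Tr, Tl.T @ B, C @ Tr, np.sqrt(s2)

A = np.diag([-1.0, -2.0, -5.0, -10.0])         # illustrative stable system
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(Ar.shape, np.all(np.linalg.eigvals(Ar).real < 0))
```

For a nonlinear plant one would replace the Euler responses with simulated trajectories of the full model around the operating point, which is exactly what keeps the scheme to linear matrix computations.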
Article
Full-text available
A graph transformation procedure is described that enables waiting times, rate constants, and committor probabilities to be calculated within a single scheme for finite-state discrete-time Markov processes. The scheme is applicable to any transition network where the states, equilibrium occupation probabilities, and transition probabilities are specified. For networks involving many states or slow overall kinetics, the deterministic graph transformation approach is faster and more accurate than direct diagonalization of the transition matrix, kinetic Monte Carlo, or iterative procedures.
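For comparison with the graph transformation procedure, committor probabilities on a small transition network can also be obtained by a direct linear solve: on intermediate states the committor is harmonic with respect to the transition matrix. A minimal sketch, with an illustrative random-walk network:

```python
import numpy as np

def committor(P, in_A, in_B):
    """Committor probabilities for a discrete-time Markov chain: q[i] is the
    probability of reaching B before A when starting from state i. On the
    intermediate states q solves q = P q, with q = 0 on A and q = 1 on B."""
    n = P.shape[0]
    inter = ~(in_A | in_B)
    q = np.zeros(n)
    q[in_B] = 1.0
    M = np.eye(int(inter.sum())) - P[np.ix_(inter, inter)]
    rhs = P[np.ix_(inter, in_B)].sum(axis=1)   # one-step probability into B
    q[inter] = np.linalg.solve(M, rhs)
    return q

# Illustrative example: symmetric random walk on {0,...,10}, A={0}, B={10}.
n = 11
P = np.zeros((n, n))
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[n - 1, n - 1] = 1.0
in_A = np.arange(n) == 0
in_B = np.arange(n) == n - 1
q = committor(P, in_A, in_B)
print(np.allclose(q, np.arange(n) / (n - 1)))   # linear committor for this walk
```

The dense solve scales cubically with the number of states; the graph transformation of the paper is precisely what replaces it for networks with many states or slow overall kinetics.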
Article
Full-text available
The aim of this note is to present an elementary proof of a variation of Harris' ergodic theorem for Markov chains. This theorem, dating back to the fifties, essentially states that a Markov chain is uniquely ergodic if it admits a "small" set which is visited infinitely often. This gives an extension of the ideas of Doeblin to the unbounded state space setting. Often this is established by finding a Lyapunov function with "small" level sets. This topic has been studied by many authors (cf. Harris, Hasminskii, Nummelin, Meyn and Tweedie). If the Lyapunov function is strong enough, one has a spectral gap in a weighted supremum norm (cf. Meyn and Tweedie). Traditional proofs of this result rely on the decomposition of the Markov chain into excursions away from the small set and a careful analysis of the exponential tail of the length of these excursions. There have been other variations which have made use of Poisson equations or worked at getting explicit constants. The present proof is very direct and relies instead on introducing a family of equivalent weighted norms indexed by a parameter β, and on making an appropriate choice of this parameter that allows one to combine, in a very elementary way, the two ingredients (existence of a Lyapunov function and irreducibility) that are crucial in obtaining a spectral gap. The original motivation of this proof was the authors' work on spectral gaps in Wasserstein metrics. The proof presented in this note is a version of our reasoning in the total variation setting, which we used to guide the calculations in arXiv:math/0602479. While we initially produced it for that purpose, we hope that it will be of interest in its own right.
Article
In this paper, we present an empirical balanced truncation method for nonlinear systems whose input vector fields are constant. First, we define differential reachability and observability Gramians. They are matrix-valued functions of the state trajectory (i.e. the initial state and input trajectory), and it is difficult to find them as functions of the initial state and input. The main result of this paper is to show that, for a fixed state trajectory, it is possible to compute the values of these Gramians by using impulse and initial state responses of the variational system. Therefore, balanced truncation can be carried out along the fixed state trajectory without solving nonlinear partial differential equations, unlike conventional nonlinear balancing methods. We further develop an approximation method, which only requires trajectories of the original nonlinear systems.
Thesis
In this work, we consider non-reversible multi-scale stochastic processes, described by stochastic differential equations, for which we review theory on the convergence behaviour to equilibrium and mean first exit times. Relations between these time scales for non-reversible processes are established, and, by resorting to a control-theoretic formulation of the large deviations action functional, even the consideration of hypo-elliptic processes is permitted. The convergence behaviour of the processes is studied in considerable detail, in particular with respect to initial conditions and temperature. Moreover, the behaviour of the conditional and marginal distributions during the relaxation phase is monitored and discussed, as we encounter unexpected behaviour. In the end, this results in the proposal of a data-based partitioning into slow and fast degrees of freedom. In addition, recently proposed techniques promising accelerated convergence to equilibrium are examined and a connection to appropriate model reduction approaches is made. For specific examples this leads either to an interesting alternative formulation of the acceleration procedure or to structural insight into the acceleration mechanism. For the model order reduction technique of effective dynamics, which uses conditional expectations, error bounds for non-reversible slow-fast stochastic processes are obtained. A comparison with the reduction method of averaging is undertaken, which, for non-reversible processes, possibly yields different reduced equations. For Ornstein-Uhlenbeck processes, sufficient conditions are derived for the two methods (effective dynamics and averaging) to agree in the infinite time scale separation regime. Additionally, we provide oblique projections which allow for the sampling of conditional distributions of non-reversible Ornstein-Uhlenbeck processes.
Chapter
We review results on the exponential convergence of multidimensional Ornstein-Uhlenbeck processes and discuss notions of characteristic time scales by means of concrete model systems. We focus, on the one hand, on exit time distributions and provide explicit expressions for the exponential rate of the distribution in the small-noise limit. On the other hand, we consider relaxation time scales of the process to its equilibrium measure in terms of relative entropy and discuss the connection with exit probabilities. Along these lines, we study examples which illustrate specific properties of the relaxation and discuss the possibility of deriving a simulation-based, empirical definition of slow and fast degrees of freedom which builds upon a partitioning of the relative entropy functional in connection with the observed relaxation behaviour.
Article
In linear system theory, Gramian matrices help quantify the input-to-state and state-to-output interactions in a state space model. In this paper, two different existing generalizations of this idea are examined for the case of a stable bilinear system. One method is based on using the L2 norm to measure signal sizes and leads to the notion of the controllability and observability functions, which are referred to collectively as energy functions. The other method is motivated by introducing algebraic generalizations of the linear system Lyapunov equations, the solutions of which are called the algebraic Gramians. While these generalizations are distinct, in this paper some new relationships between the two approaches are presented.
Article
In this article we approach a class of stochastic reachability problems with state constraints from an optimal control perspective. Preceding approaches to solving these reachability problems are either confined to the deterministic setting or address almost-sure stochastic requirements. In contrast, we propose a methodology to tackle problems with less stringent requirements than almost sure. To this end, we first establish a connection between two distinct stochastic reach-avoid problems and three classes of stochastic optimal control problems involving discontinuous payoff functions. Subsequently, we focus on solutions of one of the classes of stochastic optimal control problems - the exit-time problem, which solves both the two reach-avoid problems mentioned above. We then derive a weak version of a dynamic programming principle (DPP) for the corresponding value function; in this direction our contribution compared to the existing literature is to develop techniques that admit discontinuous payoff functions. Moreover, based on our DPP, we provide an alternative characterization of the value function as a solution of a partial differential equation (PDE) in the sense of discontinuous viscosity solutions, along with boundary conditions both in Dirichlet and viscosity senses. Theoretical justifications are also discussed to pave the way for deployment of off-the-shelf PDE solvers for numerical computations. Finally, we validate the performance of the proposed framework on the stochastic Zermelo navigation problem.
Chapter
The chapter summarizes the recently developed framework of linearly solvable stochastic optimal control. Using an exponential transformation, the (Hamilton–Jacobi) Bellman equation for such problems can be made linear, giving rise to efficient numerical methods. Extensions to game theory are also possible and lead to linear Isaacs equations. The key restriction that makes a stochastic optimal control problem linearly solvable is that the noise and the controls must act in the same subspace. The chapter focuses on discrete-time problems, i.e., linearly solvable Markov decision processes (LMDPs), and summarizes related results in continuous time. It briefly introduces the notion of game-theoretic or robust control. The chapter provides a unified treatment of the developments in linearly solvable optimal control.
Article
The main results of this paper are two-fold. The first, Theorem 1, is a generalization of the work of Chow and others concerning the set of locally accessible points of a nonlinear control system. It is shown that under quite general conditions, this set lies on a surface in state space and has a nonempty interior in the relative topology of that surface. The second result, Theorem 3, generalizes the bang-bang theorem to nonlinear control systems using higher order control variations as developed by Kelley and others. As a corollary we obtain Halkin’s bang-bang theorem for a linear piecewise analytic control system.
Article
A numerical scheme for solving high-dimensional stochastic control problems on an infinite time horizon that appear relevant in the context of molecular dynamics is outlined. The scheme rests on the interpretation of the corresponding Hamilton–Jacobi–Bellman equation as a nonlinear eigenvalue problem that, using a logarithmic transformation, can be recast as a linear eigenvalue problem, for which the principal eigenvalue and its eigenfunction are sought. The latter can be computed efficiently by approximating the underlying stochastic process with a coarse-grained Markov state model for the dominant metastable sets. We illustrate our method with two numerical examples, one of which involves the task of maximizing the population of α-helices in an ensemble of small biomolecules (alanine dipeptide), and discuss the relation to the large deviation principle of Donsker and Varadhan.
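A discrete caricature of the logarithmic transformation: on a coarse-grained Markov state model, the transformed stationary HJB equation becomes a principal eigenvalue problem for the transition matrix weighted by the state costs. The three-state model, the costs, and the power-iteration solver below are illustrative, not taken from the paper.

```python
import numpy as np

def principal_eigenpair(M, n_iter=2000):
    """Power iteration for the dominant eigenpair of a nonnegative,
    irreducible matrix (Perron-Frobenius setting)."""
    z = np.ones(M.shape[0])
    lam = 1.0
    for _ in range(n_iter):
        z = M @ z
        lam = np.linalg.norm(z)
        z /= lam
    return lam, z

# Illustrative 3-state Markov state model with per-state running costs f.
# The logarithmic transformation turns the stationary HJB equation into
#   diag(exp(-f)) P z = lam z,  with value function V = -log z
# (up to an additive constant); lam encodes the optimal average cost.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
f = np.array([0.0, 1.0, 2.0])
lam, z = principal_eigenpair(np.diag(np.exp(-f)) @ P)
V = -np.log(z)
print(V[0] < V[1] < V[2])   # cheaper states carry a lower value
```

The linear-algebra step is cheap once the metastable sets have been identified; the hard part, as the abstract notes, is building a faithful coarse-grained model in high dimension.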
Article
The first part of the paper is concerned with a version of Freidlin-Wentzell exit theorems suitable for control theoretic applications. In the second part the general results are specialized to the case of linear systems and two types of stabilizability questions are considered.
Article
The ergodic properties of SDEs, and various time discretizations for SDEs, are studied. The ergodicity of SDEs is established by using techniques from the theory of Markov chains on general state spaces, such as that expounded by Meyn-Tweedie. Application of these Markov chain results leads to straightforward proofs of geometric ergodicity for a variety of SDEs, including problems with degenerate noise and for problems with locally Lipschitz vector fields. Applications where this theory can be usefully applied include damped-driven Hamiltonian problems (the Langevin equation), the Lorenz equation with degenerate noise and gradient systems. The same Markov chain theory is then used to study time-discrete approximations of these SDEs. The two primary ingredients for ergodicity are a minorization condition and a Lyapunov condition. It is shown that the minorization condition is robust under approximation. For globally Lipschitz vector fields this is also true of the Lyapunov condition. However in the locally Lipschitz case the Lyapunov condition fails for explicit methods such as Euler-Maruyama; for pathwise approximations it is, in general, only inherited by specially constructed implicit discretizations. Examples of such discretization based on backward Euler methods are given, and approximation of the Langevin equation studied in some detail.
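The role of implicit discretizations for superlinear drifts can be illustrated on the gradient system dx = -x³ dt + dW, whose drift is only locally Lipschitz, so explicit Euler-Maruyama loses the Lyapunov condition. The sketch below uses a drift-implicit (backward Euler) step solved by Newton's method and checks an ergodic average; the step size, iteration count, and trajectory length are illustrative.

```python
import numpy as np

def backward_euler_step(x, dt, xi, newton_iters=8):
    """One drift-implicit (backward Euler) step for dx = -x^3 dt + dW:
    solve y + dt * y**3 = x + xi for y with Newton's method. The implicit
    treatment of the superlinear drift preserves the Lyapunov condition."""
    b = x + xi
    y = x
    for _ in range(newton_iters):
        y -= (y + dt * y**3 - b) / (1.0 + 3.0 * dt * y**2)
    return y

def invariant_second_moment(dt=0.02, n_steps=100_000, seed=2):
    """Ergodic average of x^2; the invariant density is proportional to
    exp(-x^4/2), for which E[x^2] is roughly 0.48."""
    rng = np.random.default_rng(seed)
    x, acc = 0.0, 0.0
    sqdt = np.sqrt(dt)
    for _ in range(n_steps):
        x = backward_euler_step(x, dt, sqdt * rng.standard_normal())
        acc += x * x
    return acc / n_steps

m2 = invariant_second_moment()
print(round(m2, 2))
```

Replacing the implicit step by the explicit update x += -x**3 * dt + noise makes large excursions amplify (x_{k+1} ≈ -dt x_k³ for large |x_k|), which is the failure mode of explicit methods discussed in the abstract.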
Article
This paper is concerned with Markov diffusion processes which obey stochastic differential equations depending on a small parameter ε. The parameter enters as a coefficient in the noise term of the stochastic differential equation. The Ventcel–Freidlin estimates give asymptotic formulas (as ε → 0) for such quantities as the probability of exit from a region D through a given portion N of the boundary ∂D, the mean exit time, and the probability of exit by a given time T. A new method to obtain such estimates is given, using ideas from stochastic control theory.
Conference Paper
We study balanced truncation for stochastic differential equations. In doing so, we adopt ideas from large deviations theory and discuss notions of controllability and observability for dissipative Hamiltonian systems with degenerate noise term, also known as Langevin equations. For partially-observed Langevin equations, we illustrate model reduction by balanced truncation with an example from molecular dynamics and discuss aspects of structure-preservation.
Article
We present a method of balancing for nonlinear systems which is an extension of balancing for linear systems in the sense that it is based on the input and output energy of a system. It is a local result, but gives ‘broader’ results than we obtain by just linearizing the system. Furthermore, the relation with balancing of the linearization is dealt with. We propose to use the method as a tool for nonlinear model reduction and investigate some of the properties of the reduced system.
Article
Thesis (Ph.D.), University of Maryland, College Park, 1999. Thesis research directed by the Dept. of Electrical and Computer Engineering. Includes bibliographical references (leaves 370–383).
Article
An algorithm is presented to compute time scales of complex processes following predetermined milestones along a reaction coordinate. A non-Markovian hopping mechanism is assumed and constructed from underlying microscopic dynamics. General analytical analysis, a pedagogical example, and numerical solutions of the non-Markovian model are presented. No assumption is made in the theoretical derivation on the type of microscopic dynamics along the reaction coordinate. However, the detailed calculations are for Brownian dynamics in which the velocities are uncorrelated in time (but spatial memory remains).
Article
Kalman's minimal realization theory involves geometric objects (controllable, unobservable subspaces) which are subject to structural instability. Specifically, arbitrarily small perturbations in a model may cause a change in the dimensions of the associated subspaces. This situation is manifested in computational difficulties which arise in attempts to apply textbook algorithms for computing a minimal realization. Structural instability associated with geometric theories is not unique to control; it arises in the theory of linear equations as well. In this setting, the computational problems have been studied for decades and excellent tools have been developed for coping with the situation. One of the main goals of this paper is to call attention to principal component analysis (Hotelling, 1933), and an algorithm (Golub and Reinsch, 1970) for computing the singular value decomposition of a matrix. Together they form a powerful tool for coping with structural instability in dynamic systems. As developed in this paper, principal component analysis is a technique for analyzing signals. (Singular value decomposition provides the computational machinery.) For this reason, Kalman's minimal realization theory is recast in terms of responses to injected signals. Application of the signal analysis to controllability and observability leads to a coordinate system in which the "internally balanced" model has special properties. For asymptotically stable systems, this yields working approximations of X_c and X_ō, the controllable and unobservable subspaces. It is proposed that a natural first step in model reduction is to apply the mechanics of minimal realization using these working subspaces.
Article
In this paper, we introduce a new method of model reduction for nonlinear control systems. Our approach is to construct an approximately balanced realization. The method requires only standard matrix computations, and we show that when it is applied to linear systems it results in the usual balanced truncation. For nonlinear systems, the method makes use of data from either simulation or experiment to identify the dynamics relevant to the input–output map of the system. An important feature of this approach is that the resulting reduced-order model is nonlinear, and has inputs and outputs suitable for control. We perform an example reduction for a nonlinear mechanical system.
Balanced averaging of bilinear systems with applications to stochastic control (Hartmann)