Content uploaded by Dirk Aeyels

Author content

All content in this area was uploaded by Dirk Aeyels

Content may be subject to copyright.

A dynamical system consists of a smooth vectorfield defined on a differentiable manifold, and a smooth mapping from the manifold to the real numbers. The vectorfield represents the dynamics of a physical system. The mapping stands for a measuring device by which experimental information on the dynamics is made available. The information itself is modeled as a sampled version of the image of the state trajectory under the smooth mapping. In this paper the observability of this set-up is discussed from the viewpoint of genericity. First the observability property is expressed in terms of transversality conditions. Then the theory of transversal intersection is called upon to yield the desired results. It is shown that almost any measuring device will combine with a given physical system to form an observable dynamical system, if (2n + 1) samples are taken and not fewer, where n is the dimension of the manifold.
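
The (2n + 1)-sample statement can be illustrated numerically. The sketch below uses a one-dimensional toy system and measurement map invented here for illustration (not taken from the paper): with n = 1, three output samples suffice to separate distinct initial states for a generic choice of h.

```python
import numpy as np

def flow(x0, t):
    # Exact solution of the toy dynamics x' = -x on the manifold M = R (n = 1).
    return x0 * np.exp(-t)

def h(x):
    # An illustrative "generic" measuring device: a smooth map from M to R.
    return np.sin(x) + 0.5 * x

def observation(x0, dt=0.1, k=3):
    # k = 2n + 1 = 3 samples of the output along the trajectory from x0.
    return np.array([h(flow(x0, i * dt)) for i in range(k)])

# Two distinct initial states yield distinct 3-sample observation vectors,
# so the sampled output determines the state for this choice of h.
y_a = observation(0.7)
y_b = observation(1.3)
print(np.linalg.norm(y_a - y_b) > 1e-6)  # True
```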


... We remark that in the case of the USP, the map h : ←U → X_U acts as an observable, or a probe, in the sense of the observable introduced in the original idea of delay embedding in [16,6]. The idea of embedding a time series originating from an attractor of dimension d of a differentiable dynamical system in the Euclidean space R^(2d+1) through a generic delay-observable first appeared in [16], and the idea of conjugacy in addition to the embedding led to the celebrated Takens embedding theorem [6]. The idea of causal embedding differs from Takens' embedding theorem [6] in at least three ways: (i) the input could be arbitrary and does not necessarily have to originate from another dynamical system. ...

... Proof of (i). Let g have the USP, and let H be as in (16). It is clear that H is continuous since h : ←U → ←X_U by Lemma 5 and since the right-shift map r : ...

The celebrated Takens' embedding theorem concerns embedding an attractor of a dynamical system in a Euclidean space of appropriate dimension through a generic delay-observation map. The embedding also establishes a topological conjugacy. In this paper, we show how an arbitrary sequence can be mapped into another space as an attractive solution of a nonautonomous dynamical system. Such a mapping also entails a topological conjugacy and an embedding between the sequence and the attractive solution spaces. This result is not a generalization of Takens' embedding theorem but helps us understand what exactly is required by the discrete-time state space models widely used in applications to embed an external stimulus onto their solution space. Our results settle another basic problem concerning the perturbation of an autonomous dynamical system. We describe what exactly happens to the dynamics when exogenous noise continuously perturbs a local irreducible attracting set (such as a stable fixed point) of a discrete-time autonomous dynamical system.

... • The second setting conforms to what one can actually do in a current clamp experiment, namely observe only the membrane voltage V(t) given the stimulating current I stim (t). This requires us to add to the basic DDF formulation the idea of constructing enlarged state spaces from the observed variables and their time delays [37,3,4,1,20]. This method is familiar and essential in the study of nonlinear dynamics and will be explained in the present context. ...

... What we observe is the operation of the full dynamics projected down to the single dimension V(t). To proceed we must effectively 'unproject' the dynamics back to a 'proxy space', composed of the voltage and its time delays [37,3,4,1,20], which is equivalent to the original state space of V(t) and the gating variables for the ion channels. This is accomplished as follows: if we have observed V(t), we can define D_E-dimensional ('unprojected') proxy space vectors S(t_n) via time delays of V(t). ...
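
The delay-vector construction described in this snippet can be sketched as follows; the names d_e and tau and the sine stand-in for the voltage trace are illustrative choices, not from the paper's code:

```python
import numpy as np

def delay_vectors(v, d_e, tau):
    """Proxy-space vectors S(t_n) = [V(t_n), V(t_n - tau), ..., V(t_n - (d_e - 1) tau)]
    from a uniformly sampled scalar series v; tau is measured in samples."""
    n0 = (d_e - 1) * tau
    return np.array([[v[n - j * tau] for j in range(d_e)] for n in range(n0, len(v))])

v = np.sin(0.1 * np.arange(200))   # stand-in for a measured voltage trace V(t)
S = delay_vectors(v, d_e=4, tau=5)
print(S.shape)                     # (185, 4)
```

Each row of S is one point of the 'unprojected' proxy space in which the full dynamics can be realized.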

Using methods from nonlinear dynamics and interpolation techniques from applied mathematics, we show how to use data alone to construct discrete time dynamical rules that forecast observed neuron properties. These data may come from simulations of a Hodgkin-Huxley (HH) neuron model or from laboratory current clamp experiments. In each case, the reduced-dimension, data-driven forecasting (DDF) models are shown to predict accurately for times after the training period.
When the available observations for neuron preparations are, for example, membrane voltage V(t) only, we use the technique of time delay embedding from nonlinear dynamics to generate an appropriate space in which the full dynamics can be realized.
The DDF constructions are reduced-dimension models relative to HH models as they are built on and forecast only observables such as V(t). They do not require detailed specification of ion channels, their gating variables, and the many parameters that accompany an HH model for laboratory measurements, yet all of this important information is encoded in the DDF model.
As the DDF models use and forecast only voltage data, they can be used in building networks with biophysical connections. Both gap junction connections and ligand gated synaptic connections among neurons involve presynaptic voltages and induce postsynaptic voltage response. Biophysically based DDF neuron models can replace other reduced-dimension neuron models, say, of the integrate-and-fire type, in developing and analyzing large networks of neurons.
When one does have detailed HH model neurons for network components, a reduced-dimension DDF realization of the HH voltage dynamics may be used in network computations to achieve computational efficiency and the exploration of larger biological networks.
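
A minimal sketch of such a data-driven forecasting rule: fit a map from delay vectors S(t_n) to the next sample and iterate it. The radial-basis-function kernel, its width, and the regularization below are illustrative assumptions, not the paper's exact construction, and a sine wave stands in for voltage data:

```python
import numpy as np

def embed(v, d, tau):
    # Delay vectors X[n] = [v[n], v[n-tau], ...] paired with targets y[n] = v[n+1].
    n0 = (d - 1) * tau
    X = np.array([[v[n - j * tau] for j in range(d)] for n in range(n0, len(v) - 1)])
    return X, v[n0 + 1:]

def fit_rbf(X, y, sigma=1.0, reg=1e-6):
    # Gaussian-kernel ridge regression on the training delay vectors.
    G = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    w = np.linalg.solve(G + reg * np.eye(len(X)), y)
    return X, w, sigma

def predict(model, s):
    C, w, sigma = model
    g = np.exp(-((C - s) ** 2).sum(-1) / (2 * sigma**2))
    return g @ w

v = np.sin(0.2 * np.arange(400))        # stand-in for training voltage data
X, y = embed(v, d=3, tau=4)
model = fit_rbf(X, y)

# One-step forecast from the last available delay vector, compared with truth:
s = np.array([v[-1], v[-1 - 4], v[-1 - 8]])
print(abs(predict(model, s) - np.sin(0.2 * 400)) < 0.05)
```

Iterating `predict` on its own output (shifting the delay vector each step) gives the multi-step forecasts discussed in the abstract.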

... In particular, our approach is to combine classical results on existence and uniqueness of solutions of ODEs and their differentiable dependence on parameters [7,8] with basic results from singularity theory on invertibility of nonlinear maps [9,10] to provide necessary and/or sufficient conditions on the data that provide desired qualitative characteristics of the inverse problem. The emphasis on conditions on the data distinguishes our work from related work on identifiability, observability, and controllability, which has focused on conditions satisfied by the parameters of the system (see, e.g., [11][12][13][14][15]). ...

Our recent work lays out a general framework for inferring information about the parameters and associated dynamics of a differential equation model from a discrete set of data points collected from the system being modeled. Rigorous mathematical results have justified this approach and have identified some common features that arise for certain classes of integrable models. In this work we present a thorough numerical investigation that shows that several of these core features extend to a paradigmatic linear-in-parameters model, the Lotka-Volterra (LV) system, which we consider in the conservative case as well as under the addition of terms that perturb the system away from this regime. A central construct for this analysis is a concise representation of parameter features in the data space that we call the $P_n$-diagram, which is particularly useful for visualization of results for low-dimensional (small $n$) systems. Our work also exposes some new properties related to non-uniqueness that arise for these LV systems, with non-uniqueness manifesting as a multi-layered structure in the associated $P_2$-diagrams.

... The idea here is that, if only observations of a few (or even only one) of the variables involved are available, one can use the history of these observations ("time-delay" measurements) to create a useful latent space in which to embed the dynamics, and in which, therefore, to learn a surrogate model with fewer variables, but involving also the history of these variables [26,27]. There are important assumptions here: finite (even low) dimensionality of the long-term dynamics, something easier to contemplate for ODEs, but possible for PDEs with attracting, low-dimensional, (possibly inertial) manifolds for their long-term dynamics. ...

E. coli chemotactic motion in the presence of a chemoattractant field has been extensively studied using wet laboratory experiments, stochastic computational models, as well as partial differential equation (PDE)-based models. The most challenging step in bridging these approaches is establishing a closed form of the so-called chemotactic term, which describes how bacteria bias their motion up chemonutrient concentration gradients, as a result of a cascade of biochemical processes. Data-driven models can be used to learn the entire evolution operator of the chemotactic PDEs (black box models), or, in a more targeted fashion, to learn just the chemotactic term (gray box models). In this work, data-driven Machine Learning approaches for learning the underlying model PDEs are (a) validated through the use of simulation data from established continuum models and (b) used to infer chemotactic PDEs from experimental data. Even when the data at hand are sparse (coarse in space and/or time), noisy (due to inherent stochasticity in measurements) or partial (e.g. lack of measurements of the associated chemoattractant field), we can attempt to learn the right-hand side of a closed PDE for an evolving bacterial density. In fact, we show that data-driven PDEs including a short history of the bacterial density field (e.g. in the form of higher-order in time PDEs in terms of the measurable bacterial density) can be successful in predicting further bacterial density evolution, and even possibly recovering estimates of the unmeasured chemonutrient field. The main tool in this effort is the effective low-dimensionality of the dynamics (in the spirit of the Whitney and Takens embedding theorems). The resulting data-driven PDE can then be simulated to reproduce/predict computational or experimental bacterial density profile data, and estimate the underlying (unmeasured) chemonutrient field evolution.
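
The "learn the right-hand side of a closed PDE" idea can be sketched on a deliberately simplified toy case: a density obeying pure diffusion, rho_t = D rho_xx, with rho_t and rho_xx estimated from snapshots by finite differences and one coefficient recovered by regression. The chemotaxis models in the text add a chemotactic flux term; this minimal version (all names and parameters invented here) only recovers a diffusion coefficient:

```python
import numpy as np

dx, dt, D_true = 0.1, 0.001, 0.5
x = np.arange(0, 10, dx)
rho = np.exp(-(x - 5) ** 2)                   # initial density bump
frames = [rho.copy()]
for _ in range(200):                          # explicit Euler simulation of rho_t = D rho_xx
    lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx**2
    rho = rho + dt * D_true * lap
    frames.append(rho.copy())
frames = np.array(frames)

# "Gray box" step: estimate rho_t and rho_xx from the stored snapshots alone,
# then regress one on the other to identify the coefficient in the RHS.
rho_t = (frames[2:] - frames[:-2]) / (2 * dt)            # centered in time
lap = (np.roll(frames[1:-1], 1, 1) - 2 * frames[1:-1]
       + np.roll(frames[1:-1], -1, 1)) / dx**2           # centered in space
D_est = (lap.ravel() @ rho_t.ravel()) / (lap.ravel() @ lap.ravel())
print(abs(D_est - D_true) < 0.01)
```

With noisy or partial data, the same regression step is replaced by the machine-learning models described in the abstract.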

... Takens' theorem prescribes that all the information needed to constrain the model is contained in an observation window of finite duration [52]. Aeyels [53] refined this argument, showing that the number of observations need be no less than 2L + 1, where L is the number of state variables in the system. ...

Model optimization in neuroscience has focused on inferring intracellular parameters from time series observations of the membrane voltage and calcium concentrations. These parameters constitute the fingerprints of ion channel subtypes and may identify ion channel mutations from observed changes in electrical activity. A central question in neuroscience is whether computational methods may obtain ion channel parameters with sufficient consistency and accuracy to provide new information on the underlying biology. Finding single-valued solutions, in particular, remains an outstanding theoretical challenge. This note reviews recent progress in the field. It first covers well-posed problems and describes the conditions that the model and data need to meet to warrant the recovery of all the original parameters—even in the presence of noise. The main challenge is model error, which reflects our lack of knowledge of exact equations. We report on strategies that have been partially successful at inferring the parameters of rodent and songbird neurons, when model error is sufficiently small for accurate predictions to be made irrespective of stimulation.

... Together with the results from Packard et al. [23] and Aeyels [24], the definitions and theorems of Takens [21] describe the concept of observability of state spaces of nonlinear dynamical systems. A dynamical system is defined through its state space (here, the manifold M) and a diffeomorphism φ : M → M. Theorem 1 (Generic delay embeddings). For pairs (φ, y), with φ : M → M a smooth diffeomorphism and y : M → R a smooth function, it is a generic property that the map Φ_(φ,y) : M → R^(2d+1), defined by ...
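
The delay map in this theorem sends a state x to (y(x), y(φ(x)), ..., y(φ^(2d)(x))). A numerical sketch for the circle M = S^1 (so d = 1), with φ an irrational rotation and y = cos as an illustrative generic pair:

```python
import numpy as np

alpha = 2 * np.pi * (np.sqrt(5) - 1) / 2      # irrational rotation angle

def phi(theta):
    # A diffeomorphism of the circle: rotation by alpha.
    return (theta + alpha) % (2 * np.pi)

def Phi(theta):
    # Delay map Phi_(phi, y)(theta) = (y(theta), y(phi(theta)), y(phi^2(theta))) in R^3.
    return np.array([np.cos(theta), np.cos(phi(theta)), np.cos(phi(phi(theta)))])

# Distinct states on a grid have distinct images, illustrating that the delay
# map is injective (an embedding) for this choice of (phi, y).
thetas = np.linspace(0, 2 * np.pi, 200, endpoint=False)
images = np.stack([Phi(t) for t in thetas])
dists = np.linalg.norm(images[:, None] - images[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)
print(dists.min() > 1e-4)  # True
```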

Quantum process tomography conventionally uses a multitude of initial quantum states and then performs state tomography on the process output. Here we propose and study an alternative approach which requires only a single (or few) known initial states together with time-delayed measurements for reconstructing the unitary map and corresponding Hamiltonian of the time dynamics. The overarching mathematical framework and feasibility guarantee of our method is provided by the Takens embedding theorem. We explain in detail how the reconstruction of a single qubit Hamiltonian works in this setting, and provide numerical methods and experiments for general few-qubit and lattice systems with local interactions. In particular, the method allows one to find the Hamiltonian of a two qubit system by observing only one of the qubits.
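
A toy version of the time-delayed-measurement idea on one qubit (the paper's scheme is far more general; the Hamiltonian, initial state, and frequency-estimation step below are illustrative choices): for H = (ω/2)σx and initial state |0⟩, the record ⟨σz(t)⟩ = cos(ωt), so ω can be read off the dominant frequency of the sampled signal.

```python
import numpy as np

omega_true = 2.0
dt, n = 0.05, 1024
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

# Exact single-step propagator exp(-i H dt), built in the sigma_x eigenbasis.
evals, evecs = np.linalg.eigh((omega_true / 2) * sx)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

psi = np.array([1, 0], complex)               # initial state |0>
record = []
for _ in range(n):
    record.append((psi.conj() @ sz @ psi).real)
    psi = U @ psi

# Estimate omega from the spectral peak of the time-delayed measurement record.
spec = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(n, d=dt)
omega_est = 2 * np.pi * freqs[spec[1:].argmax() + 1]   # skip the DC bin
print(abs(omega_est - omega_true) < 0.1)
```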

Most of the existing methods and techniques for the detection of chaotic behaviour from empirical time series try to quantify the well-known sensitivity to initial conditions through the estimation of the so-called Lyapunov exponents corresponding to the data generating system, even if this system is unknown. Some of these methods are designed to operate in noise-free environments, such as those methods that directly quantify the separation rate of two initially close trajectories. As an alternative, this paper provides two nonlinear indirect regression methods for estimating the Lyapunov exponents in a noisy environment. We extend the global Jacobian method, by using local polynomial kernel regressions and local neural net kernel models. We apply such methods to several noise-contaminated time series coming from different data generating processes. The results show that in general, the Jacobian indirect methods provide better results than the traditional direct methods for both clean and noisy time series. Moreover, the local Jacobian indirect methods provide more robust and accurate fits than the global ones, with the methods using local networks obtaining more accurate results than those using local polynomials.
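
The Jacobian route to a Lyapunov exponent can be sketched on the logistic map x → 4x(1−x), whose largest exponent is known to equal ln 2. Here the map is known, so the Jacobian (the derivative f′) is exact; the regression methods in the abstract instead estimate this derivative from noisy data:

```python
import numpy as np

def lyapunov_logistic(x0=0.3, n=100_000, burn=1000):
    # Average of log |f'(x)| along an orbit of f(x) = 4x(1-x), f'(x) = 4 - 8x.
    x = x0
    for _ in range(burn):                 # discard the transient
        x = 4 * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(4 - 8 * x))
        x = 4 * x * (1 - x)
    return acc / n

lam = lyapunov_logistic()
print(abs(lam - np.log(2)) < 0.02)        # exponent is ln 2 for this map
```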


The main objective of this work is to design a virtual sensor capable of estimating variables that are unmeasurable on-line in the air and charging subsystem of a Diesel engine. In order to achieve this objective, a data-driven approach is pursued. In particular, we show that, by combining high-gain observers and feed-forward neural networks, it is possible to design an observer for the air and charging system of a Diesel engine on the basis of data acquired via a test bench. The performance of this observer is evaluated in a real experimental setting.

Different notions of observability are compared for systems defined by polynomial difference equations. The main result states that, for systems having the standard property of (multiple-experiment initial-state) observability, the response to a generic input sequence is sufficient for final-state determination. Some remarks are made on results for nonpolynomial and/or continuous-time systems. An identifiability result is derived from the above.

It is shown that, for observable continuous time systems whose dynamics and output are given by polynomial functions, the observation of the output that corresponds to a single input u is sufficient to determine the initial state, provided that u is suitably chosen. The "good" u's are an open dense subset of the set of all infinitely differentiable inputs. In particular, one can choose u to be a polynomial. Moreover, if the degree N is sufficiently large, then the "good" polynomial inputs of degree not greater than N form an open dense subset W of the set of all polynomials of degree not greater than N. The set W is semialgebraic, i.e., describable by finitely many polynomial inequalities. Similar results are proved for parameter identification.

This complete and authoritative presentation of the current status of control theory offers a useful foundation for both study and research. With emphasis on general nonlinear differential systems, the book is carefully and systematically developed from elementary motivating examples, through the most comprehensive theory, to the final numerical solution of serious scientific and engineering control problems. The book features reviews of the most recent research on processes described by partial differential equations, functional-differential, and delay-differential equations; the most recent treatment of impulse controllers, bounded rate controllers, feedback controllers, and bounded phase problems; and many unpublished new research results of the authors. In addition to an exhaustive treatment of the quantitative problems of optimal control, the qualitative concepts of stability, controllability, observability, and plant recognition receive a complete exposition. (Author)

A mechanism for the generation of turbulence and related phenomena in dissipative systems is proposed.