About
111 Publications
18,363 Reads
1,197 Citations
Introduction
Current institution
Education
July 1997 - June 2001
Publications (111)
We present a unified theoretical and computational framework for constructing reproducing kernels tailored to transport equations and adapted to Koopman eigenfunctions of nonlinear dynamical systems. These eigenfunctions satisfy a transport-type partial differential equation (PDE) that we invert using three analytically grounded methods: (i) A Lio...
Machine Learning (ML) and Algorithmic Information Theory (AIT) offer distinct yet complementary approaches to understanding and addressing complexity. This paper investigates the synergy between these disciplines in two directions: AIT for Kernel Methods and Kernel Methods for AIT. In the former, we explore how AIT concepts inspire the design of ke...
Making accurate inferences about data is a key task in science and mathematics. Here we study the problem of retrodiction, inferring past values of a series, in the context of chaotic dynamical systems. Specifically, we are interested in inferring the starting value x0 in the series x0, x1, x2, ..., xn given the value of xn, and the associated fun...
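As a hedged illustration of the retrodiction problem described in this summary (the paper's systems and inference method are not spelled out here), the sketch below enumerates preimages of the logistic map, showing how the set of candidate starting values x0 grows at each backward step; the map and its parameters are assumptions for illustration only.

```python
# Sketch: retrodiction for the logistic map x_{k+1} = r*x_k*(1 - x_k).
# Illustrative only: each state has up to two preimages, so the number of
# candidate starting values x0 doubles at every step backwards.
import numpy as np

def preimages(y, r=4.0):
    """Return the (up to two) real preimages x with r*x*(1-x) = y."""
    disc = 1.0 - 4.0 * y / r
    if disc < 0:
        return []
    s = np.sqrt(disc)
    return [0.5 * (1 - s), 0.5 * (1 + s)]

def retrodict(xn, n, r=4.0):
    """All candidate x0 values that reach xn after n forward iterations."""
    candidates = [xn]
    for _ in range(n):
        candidates = [p for y in candidates for p in preimages(y, r)]
    return candidates

if __name__ == "__main__":
    r, n = 4.0, 6
    x0_true = 0.1234
    x = x0_true
    for _ in range(n):
        x = r * x * (1 - x)
    cands = retrodict(x, n, r)
    best = min(cands, key=lambda c: abs(c - x0_true))
    print(f"{len(cands)} candidate x0 values; closest to the true x0: {best:.6f}")
```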
Methods have previously been developed for the approximation of Lyapunov functions using radial basis functions. We consider the problem of data-based approximation of a given Lyapunov function using the principal eigenfunctions of the Koopman operator without explicitly computing the operator itself. We demonstrate the effectiveness of our algorit...
Background/Objectives: The research addresses algorithmic bias in deep learning models for cardiovascular risk prediction, focusing on fairness across demographic and socioeconomic groups to mitigate health disparities. It integrates fairness-aware algorithms, susceptible carrier-infected-recovered (SCIR) models, and interpretability frameworks to...
Machine Learning (ML) and Algorithmic Information Theory (AIT) address complexity from distinct yet complementary perspectives. This paper explores the synergy between AIT and ML, specifically examining how kernel methods can effectively bridge these disciplines. By applying these methods to the problems of Clustering and Density Estimation-fundamen...
In this paper we use Gaussian processes (kernel methods) to learn mappings between trajectories of distinct differential equations. Our goal is to simplify both the representation and the solution of these equations. We begin by examining the Cole-Hopf transformation, a classical result that converts the nonlinear, viscous Burgers' equation into th...
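For reference, the Cole-Hopf transformation mentioned in this summary can be stated in its standard form (with viscosity ν):

```latex
u_t + u\,u_x = \nu\,u_{xx},
\qquad
u(x,t) = -2\nu\,\frac{\varphi_x(x,t)}{\varphi(x,t)}
\quad\Longrightarrow\quad
\varphi_t = \nu\,\varphi_{xx}.
```

Interpolating the linear heat-equation flow and mapping back through this substitution is one classical route to the nonlinear Burgers' dynamics, which is the kind of trajectory-to-trajectory mapping the summary refers to.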
This paper examines the application of the Kernel Sum of Squares (KSOS) method for enhancing kernel learning from data, particularly in the context of dynamical systems. Traditional kernel-based methods, despite their theoretical soundness and numerical efficiency, frequently struggle with selecting optimal base kernels and parameter tuning, especi...
Our research evaluates advanced artificial intelligence (AI) methodologies to enhance diagnostic accuracy in pulmonary radiography. Utilizing DenseNet121 and ResNet50, we analyzed 108,948 chest X-ray images from 32,717 patients and DenseNet121 achieved an area under the curve (AUC) of 94% in identifying the conditions of pneumothorax and oedema. The model’s per...
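A minimal sketch of the kind of pipeline this summary describes is given below; the label set, preprocessing, data loader and training schedule are not specified in the summary, so the choices here are placeholders rather than the authors' configuration.

```python
# Sketch of a multi-label chest X-ray classifier in the spirit of the summary.
# Hypothetical: class list, data loader and weight choice are placeholders.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

NUM_FINDINGS = 2  # e.g. pneumothorax, oedema (placeholder label set)

# weights=None here; a pretrained backbone can be requested depending on the
# installed torchvision version.  Training (e.g. with nn.BCEWithLogitsLoss) omitted.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

def evaluate_auc(model, loader, device="cpu"):
    """Per-finding AUC over a validation loader yielding (images, labels)."""
    model.eval()
    scores, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            scores.append(torch.sigmoid(logits).cpu())
            labels.append(y)
    scores = torch.cat(scores).numpy()
    labels = torch.cat(labels).numpy()
    return [roc_auc_score(labels[:, j], scores[:, j]) for j in range(NUM_FINDINGS)]
```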
Cardiovascular diseases (CVDs) remain a major global health challenge and a leading cause of mortality, highlighting the need for improved predictive models. We introduce an innovative agent-based dynamic simulation technique that enhances our AI models’ capacity to predict CVD progression. This method simulates individual patient responses to vari...
Arguments inspired by algorithmic information theory predict an inverse relation between the probability and complexity of output patterns in a wide range of input–output maps. This phenomenon is known as simplicity bias. By viewing the parameters of dynamical systems as inputs, and the resulting (digitised) trajectories as outputs, we study simpli...
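The sketch below is a hedged, self-contained version of the experiment this summary describes: sample parameters of a dynamical system at random, digitise the resulting trajectories, and compare each output pattern's empirical probability with a complexity estimate. The logistic map and the zlib-compression proxy are illustrative assumptions, not the paper's exact choices.

```python
# Sketch of a simplicity-bias experiment: random parameters in, digitised
# trajectories out, then frequency vs. a crude complexity proxy.
import zlib
from collections import Counter

import numpy as np

def digitised_trajectory(r, x0=0.5, n=30):
    """Binary string from a logistic-map trajectory (1 if x > 0.5)."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append("1" if x > 0.5 else "0")
    return "".join(bits)

def complexity(pattern):
    """Compressed length as a rough stand-in for Lempel-Ziv-style complexity."""
    return len(zlib.compress(pattern.encode()))

rng = np.random.default_rng(0)
outputs = [digitised_trajectory(r) for r in rng.uniform(3.5, 4.0, 50_000)]
freq = Counter(outputs)

# Simplicity bias: high-probability outputs should have low complexity.
for pattern, count in freq.most_common(5):
    print(f"P={count/len(outputs):.4f}  K~{complexity(pattern)}  {pattern}")
```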
Early pregnancy loss (EPL) is a prevalent health concern with significant implications globally for gestational health. This research leverages machine learning to enhance the prediction of EPL and to differentiate between typical pregnancies and those at elevated risk during the initial trimester. We employed different machine learning methodologi...
Machine Learning (ML) and Algorithmic Information Theory (AIT) look at Complexity from different points of view. We explore the interface between AIT and Kernel Methods (that are prevalent in ML) by adopting an AIT perspective on the problem of learning kernels from data, in kernel ridge regression, through the method of Sparse Kernel Flows. In par...
Using short histories of observations from a dynamical system, a mechanism for the post-training initialization of reservoir computing systems is described. This strategy is called cold-starting, and it is based on a map called the starting map, which is determined by an appropriately short history of observations that maps to a unique initial cond...
Simplicity bias is an intriguing phenomenon prevalent in various input-output maps, characterized by a preference for simpler, more regular, or symmetric outputs. Notably, these maps typically feature high-probability outputs with simple patterns, whereas complex patterns are exponentially less probable. This bias has been extensively examined and...
Arguments inspired by algorithmic information theory predict an inverse relation between the probability and complexity of output patterns in a wide range of input-output maps. This phenomenon is known as simplicity bias. By viewing the parameters of dynamical systems as inputs, and resulting (digitised) trajectories as outputs, we study simplicity...
Machine Learning (ML) and Algorithmic Information Theory (AIT) look at Complexity from different points of view. We explore the interface between AIT and Kernel Methods (that are prevalent in ML) by adopting an AIT perspective on the problem of learning kernels from data through the method of Sparse Kernel Flows introduced in [YSH+22]. We prove,...
In this paper, we deal with hypernormal forms of non-resonant double Hopf singularities. We investigate the infinite level normal form classification of such singularities with nonzero radial cubic part. We provide a normal form decomposition of normal form vector fields in terms of planar-rotating and planar-radial vector fields. These facilitate...
In this paper, we explore hypernormal forms of vector fields that have non-resonant double Hopf singularities with a non-zero radial cubic part. Our primary focus is on investigating the infinite-level normal form classification of this type of singularities. We provide a normal form decomposition in terms of planar-rotating and planar-radial vecto...
Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems. As shown in [27, 15, 36, 16, 39, 29, 48], a simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a data-adapted kernel which can be learned by us...
We consider the problem of learning Stochastic Differential Equations of the form dXt=f(Xt)dt+σ(Xt)dWt from one sample trajectory. This problem is more challenging than learning deterministic dynamical systems because one sample trajectory only provides indirect information on the unknown functions f, σ, and stochastic process dWt representing the...
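A minimal sketch of one common route to this problem is shown below: estimate the drift and squared diffusion as conditional moments of the increments along the trajectory, here with a plain Gaussian kernel ridge regressor. The estimator, ground-truth functions and tuning constants are illustrative assumptions; the paper's method and guarantees are not reproduced.

```python
# Sketch: recover drift f and diffusion sigma of a scalar SDE from one
# trajectory by regressing conditional moments of the increments.
import numpy as np

def krr_fit(x, y, lengthscale=0.3, reg=1e-3):
    """Gaussian-kernel ridge regression; returns a callable predictor."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * lengthscale ** 2))
    alpha = np.linalg.solve(K + reg * np.eye(len(x)), y)
    return lambda z: np.exp(-(z[:, None] - x[None, :]) ** 2
                            / (2 * lengthscale ** 2)) @ alpha

# One Euler--Maruyama trajectory of dX = f(X)dt + sigma(X)dW (ground truth below).
f_true = lambda x: -x                    # Ornstein-Uhlenbeck-like drift
s_true = lambda x: 0.5 + 0.1 * x ** 2
rng = np.random.default_rng(1)
dt, n = 1e-3, 100_000
X = np.empty(n); X[0] = 1.0
for k in range(n - 1):
    X[k + 1] = X[k] + f_true(X[k]) * dt + s_true(X[k]) * np.sqrt(dt) * rng.standard_normal()

dX = np.diff(X)
idx = rng.choice(n - 1, 2000, replace=False)          # subsample for the Gram matrix
f_hat = krr_fit(X[idx], dX[idx] / dt, reg=10.0)       # E[dX | X] / dt   -> drift
s2_hat = krr_fit(X[idx], dX[idx] ** 2 / dt, reg=10.0) # E[dX^2 | X] / dt -> sigma^2
# (regularisation chosen loosely here because the increment "targets" are very noisy)

z = np.linspace(-0.5, 0.5, 5)
print("drift  est vs true:", np.round(f_hat(z), 2), np.round(f_true(z), 2))
print("sigma2 est vs true:", np.round(s2_hat(z), 2), np.round(s_true(z) ** 2, 2))
```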
To what extent can we forecast a time series without fitting to historical data? Can universal patterns of probability help in this task? Deep relations between pattern Kolmogorov complexity and pattern probability have recently been used to make a priori probability predictions in a variety of systems in physics, biology and engineering. Here we s...
A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF) [OY19] (which uses gradient-based optimization to learn a kernel based on the premi...
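For concreteness, a minimal version of the Kernel Flows criterion referred to here (the ρ loss of Owhadi and Yoo, in which a kernel is judged by how little the interpolant degrades when half the data is removed) can be sketched as follows; the parametric family and the grid search below are illustrative simplifications of the gradient-based procedure.

```python
# Sketch of the Kernel Flows loss rho = 1 - y_c^T K_c^{-1} y_c / (y^T K^{-1} y),
# where X_c is a random half of the data.  Illustrative Gaussian kernel family.
import numpy as np

def gauss_K(X, Y, lengthscale):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def kf_rho(X, y, lengthscale, reg=1e-6, rng=None):
    """One-sample estimate of the KF loss for a given lengthscale."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    c = rng.choice(n, n // 2, replace=False)
    K = gauss_K(X, X, lengthscale) + reg * np.eye(n)          # small nugget for stability
    Kc = gauss_K(X[c], X[c], lengthscale) + reg * np.eye(len(c))
    num = y[c] @ np.linalg.solve(Kc, y[c])
    den = y @ np.linalg.solve(K, y)
    return 1.0 - num / den

# Toy data: states and observed vector-field values of some dynamical system.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0])                  # stand-in for sampled vector-field values

for ls in [0.05, 0.2, 0.5, 1.0, 2.0]:
    rhos = [kf_rho(X, y, ls, rng=np.random.default_rng(s)) for s in range(20)]
    print(f"lengthscale {ls:4}:  mean rho = {np.mean(rhos):.3f}")
```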
Training a residual neural network with L2 regularization on weights and biases is equivalent to minimizing a discrete least action principle and to controlling a discrete Hamiltonian system representing the propagation of input data across layers. The kernel/feature map analysis of this Hamiltonian system suggests a mean-field limit for trained we...
This technical note presents an application of kernel mode decomposition (KMD) for detecting critical transitions in some fast-slow random dynamical systems. The approach rests upon using KMD for reconstructing an observable with a novel data-based time-frequency-phase kernel that allows one to approximate signals with critical transitions. In particu...
To what extent can we forecast a time series without fitting to historical data? Can universal patterns of probability help in this task? Deep relations between pattern Kolmogorov complexity and pattern probability have recently been used to make a priori probability predictions in a variety of systems in physics, biology and engineering. Here we s...
In previous work, we showed that learning dynamical systems [21] with kernel methods can achieve state-of-the-art results, both in terms of accuracy and complexity, for predicting climate/weather time series [20], when the kernel is also learned from data. While the kernels considered in previous work were parametric, in this follow-up paper, we test a non-...
A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF) [34] (which uses gradient-based optimization to learn a kernel based on the premise...
A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF) [34] (which uses gradient-based optimization to learn a kernel based on the premise...
A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF) [Owhadi19] (which uses gradient-based optimization to learn a kernel based on t...
Modelling geophysical processes as low-dimensional dynamical systems and regressing their vector field from data is a promising approach for learning emulators of such systems. We show that when the kernel of these emulators is also learned from data (using kernel flows, a variant of cross-validation), then the resulting data-driven models are not...
For dynamical systems with a non-hyperbolic equilibrium, it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to the equilibrium point and to obtain meaningful predictions of its behavior by analyzing a reduced or...
Modeling geophysical systems as dynamical systems and regressing their vector field from data is a simple way to learn emulators for such systems. We show that when the kernel of these emulators is also learned from data (using kernel flows, a variant of cross-validation), then the resulting data-driven models are not only faster than equation-base...
Modeling geophysical systems as dynamical systems and regressing their vector field from data is a simple way to learn emulators for such systems. We show that when the kernel of these emulators is also learned from data (using kernel flows, a variant of cross-validation), then the resulting data-driven models are not only faster than equation-base...
We present a novel kernel-based machine learning algorithm for identifying the low-dimensional geometry of the effective dynamics of high-dimensional multiscale stochastic systems. Recently, the authors developed a mathematical framework for the computation of optimal reaction coordinates of such systems that is based on learning a parameterization...
Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems. We present variants of cross-validation (Kernel Flows (Owhadi and Yoo, 2019) and its variants based on Maximum Mean Discrepancy and Lyapunov exponents) as simple approaches for learning the kernel us...
For dynamical systems with a non-hyperbolic equilibrium, it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to the equilibrium point and to obtain meaningful predictions of its behavior by analyzing a reduced or...
For deterministic continuous-time nonlinear control systems, epsilon-practical stabilization entropy and practical stabilization entropy are introduced. Here the rate of attraction is specified by a KL-function. Upper and lower bounds for these entropies are proved, with special attention to exponential KL-functions. Two scalar examples are a...
For certain dynamical systems it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to a non-hyperbolic equilibrium point, and to obtain meaningful predictions of its behavior by analyzing a reduced dimensional pro...
Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems. We present variants of cross-validation (Kernel Flows [31] and its variants based on Maximum Mean Discrepancy and Lyapunov exponents) as simple approaches for learning the kernel used in these emulat...
Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems. We present variants of cross-validation (Kernel Flows [31] and its variants based on Maximum Mean Discrepancy and Lyapunov exponents) as simple approaches for learning the kernel used in these emulat...
Many dimensionality and model reduction techniques rely on estimating dominant eigenfunctions of associated dynamical operators from data. Important examples include the Koopman operator and its generator, but also the Schrödinger operator. We propose a kernel-based method for the approximation of differential operators in reproducing kernel Hilber...
Many dimensionality and model reduction techniques rely on estimating dominant eigenfunctions of associated dynamical operators from data. Important examples include the Koopman operator and its generator, but also the Schrödinger operator. We propose a kernel-based method for the approximation of differential operators in reproducing kernel Hilb...
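As background, one standard kernel construction in this area (kernel EDMD for Koopman eigenvalues from snapshot pairs) can be sketched as below; this is not the paper's estimator for differential operators, and the toy map, lengthscale and regularisation are assumptions for illustration.

```python
# Sketch of kernel EDMD: Koopman eigenvalue estimates from Gram matrices of
# snapshot pairs (x_i, y_i), with y_i the state one step after x_i.
import numpy as np

def gauss_K(X, Y, ls=0.5):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ls ** 2))

rng = np.random.default_rng(0)
M = np.array([[0.9, 0.1], [0.0, 0.8]])      # toy linear map with known spectrum
X = rng.standard_normal((300, 2))
Y = X @ M.T

G = gauss_K(X, X)                            # G_ij   = k(x_i, x_j)
GYX = gauss_K(Y, X)                          # GYX_ij = k(y_i, x_j)
K_hat = np.linalg.solve(G + 1e-4 * np.eye(len(X)), GYX)   # mild regularisation
eigvals = np.linalg.eigvals(K_hat)

# For this map the true Koopman eigenvalues are 1 (constant observable) and the
# products of 0.9 and 0.8; the leading estimates typically land near them.
print(np.round(sorted(eigvals, key=abs, reverse=True)[:5], 3))
```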
The paper studies an extension to nonlinear systems of a recently proposed approach to the definition of modal participation factors. A definition is given for local mode-in-state participation factors for smooth nonlinear autonomous systems. While the definition is general, the resulting measures depend on the assumed uncertainty law governing the...
Methods from learning theory are used in the state space of linear dynamical and control systems in order to estimate relevant matrices and some relevant quantities such as the topological entropy. An application to stabilization via algebraic Riccati equations is included by viewing a control system as an autonomous system in an extended space of...
Methods from learning theory are used in the state space of linear dynamical systems in order to estimate the system matrices and some relevant quantities such as the topological entropy. The approach is illustrated via a series of numerical examples.
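A hedged sketch of the basic computation behind these two summaries: fit the system matrix of a linear system by least squares from one trajectory, then estimate the topological entropy from the unstable eigenvalues (for a linear map this is the sum of log|λ| over eigenvalues with |λ| > 1). The toy matrix and trajectory length are illustrative.

```python
# Sketch: estimate A in x_{k+1} = A x_k by least squares from one trajectory,
# then read off an entropy estimate from the unstable eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.2, 0.3], [0.0, 0.5]])   # one expanding direction
x = rng.standard_normal(2)
traj = [x]
for _ in range(15):                            # short, well-conditioned trajectory
    x = A_true @ x
    traj.append(x)
traj = np.array(traj)

X0, X1 = traj[:-1], traj[1:]
A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T   # solves X1 ≈ X0 @ A_hat.T

lam = np.linalg.eigvals(A_hat)
entropy_hat = np.sum(np.log(np.abs(lam)[np.abs(lam) > 1]))
print("estimated A:\n", np.round(A_hat, 3))
print("topological entropy estimate:", round(float(entropy_hat), 3),
      " (true:", round(float(np.log(1.2)), 3), ")")
```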
We present a novel kernel-based machine learning algorithm for identifying the low-dimensional geometry of the effective dynamics of high-dimensional multiscale stochastic systems. Recently, the authors developed a mathematical framework for the computation of optimal reaction coordinates of such systems that is based on learning a parametrization...
We study the maximum mean discrepancy (MMD) in the context of critical transitions modelled by fast-slow stochastic dynamical systems. We establish a new link between the dynamical theory of critical transitions and the statistical aspects of the MMD. In particular, we show that a formal approximation of the MMD near fast subsystem bifurcation poi...
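The statistic itself is easy to state: below is a hedged sketch of the empirical (biased) squared MMD with a Gaussian kernel between a reference window and later windows of a time series, used as an early-warning indicator. The fast-slow model, kernel choice and window sizes are illustrative assumptions, and the paper's asymptotic analysis is not reproduced.

```python
# Sketch: empirical squared MMD between two windows of a 1-D time series.
import numpy as np

def mmd2(x, y, ls=0.5):
    """Biased squared MMD with a Gaussian kernel between two 1-D samples."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Toy signal whose variance grows slowly, mimicking the approach to a transition.
rng = np.random.default_rng(0)
t = np.arange(4000)
signal = (0.1 + t / 4000) * rng.standard_normal(len(t))

window = 400
reference = signal[:window]
for start in range(window, len(t) - window, window):
    d = mmd2(reference, signal[start:start + window])
    print(f"window starting at {start:4d}:  MMD^2 = {d:.4f}")
```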
The paper studies an extension to nonlinear systems of a recently proposed approach to the concept of modal participation factors. First, a definition is given for local mode-in-state participation factors for smooth nonlinear autonomous systems. The definition is general, and, unlike in the more traditional approach, the resulting participation me...
For certain dynamical systems it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to a non-hyperbolic equilibrium point, and to obtain meaningful predictions of its behavior by analyzing a reduced dimensional pro...
For certain dynamical systems it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to a non-hyperbolic equilibrium point, and to obtain meaningful predictions of its behavior by analyzing a reduced dimensional pro...
We study the maximum mean discrepancy (MMD) in the context of critical transitions modelled by fast-slow stochastic dynamical systems. We establish a new link between the dynamical theory of critical transitions and the statistical aspects of the MMD. In particular, we show that a formal approximation of the MMD near fast subsystem bifurcation po...
We study the maximum mean discrepancy (MMD) in the context of critical transitions modelled by fast-slow stochastic dynamical systems. We establish a new link between the dynamical theory of critical transitions and the statistical aspects of the MMD. In particular, we show that a formal approximation of the MMD near fast subsystem bifurcation poi...
In this paper we use the Maximum Mean Discrepancy (in Reproducing Kernel Hilbert Spaces) as a measure of heterogeneity between probability measures to detect seizures.
We introduce a data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method rests on the assumption that the nonlinear system behaves linearly when lifted into a high (or infinite) dimensional feature space where balanced truncation may be carri...
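For orientation, the classical balanced-truncation computation that the lifted method builds on can be sketched as follows for a linear system; the kernel lifting and the data-driven Gramian estimates of the paper are not shown, and the toy system below is an assumption for illustration.

```python
# Sketch of classical balanced truncation (square-root algorithm) for a stable
# linear system dx = Ax + Bu, y = Cx; the feature-space lifting is not shown.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Return the order-r balanced truncation (Ar, Br, Cr) and Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                            # reduced -> full coordinates
    Tinv = S @ U[:, :r].T @ Lo.T                     # full -> reduced coordinates
    return Tinv @ A @ T, Tinv @ B, C @ T, s

# Stable toy system; the Hankel singular values indicate how many states matter.
A = np.diag([-1.0, -2.0, -50.0])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```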
We introduce a data-based approach to estimating key quantities which arise in the study of nonlinear control systems and random nonlinear dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems - with a reasonable expectation of success - once the nonlinear syst...
Methods have previously been developed for the approximation of Lyapunov functions using radial basis functions. However, these methods assume that the evolution equations are known. We consider the problem of approximating a given Lyapunov function using radial basis functions where the evolution equations are not known, but we instead have sampled...
Methods from learning theory are used in the state space of linear dynamical and control systems in order to estimate the system matrices. An application to stabilization via algebraic Riccati equations is included. The approach is illustrated via a series of numerical examples.
The study of the behavior of solutions of ODEs often benefits from deciding on a convenient choice of coordinates. This choice of coordinates may be used to "simplify" the functional expressions that appear in the vector field in order that the essential features of the flow of the ODE near a critical point become more evident. In the case of the a...
The study of the behavior of solutions of ODEs often benefits from deciding on a convenient choice of coordinates. This choice of coordinates may be used to "simplify" the functional expressions that appear in the vector field in order that the essential features of the flow of the ODE near a critical point become more evident. In the case of the a...
We introduce a data-based approach to estimating key quantities which arise in the study of nonlinear control systems and random nonlinear dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems - with a reasonable expectation of success - once the nonlinear syst...
We introduce a data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method rests on the assumption that the nonlinear system behaves linearly when lifted into a high (or infinite) dimensional feature space where balanced truncation may be carri...
We introduce a novel data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method rests on the assumption that the nonlinear system behaves linearly when lifted into a high (or infinite) dimensional feature space where balanced truncation may be...
In this correspondence, we propose a methodology to stabilize systems with control bifurcations by introducing “the controlled center systems.” A controlled center system is a reduced-order controlled dynamics consisting of the linearly uncontrollable dynamics with the first variable of the linearly controllable dynamics as input. The contr...
In this paper, we introduce the Controlled center dynamics for nonlinear discrete time systems with uncontrollable linearization. This is a reduced order control system whose dimension is the number of uncontrollable modes and whose stabilizability properties determine the stabilizability properties of the full order system. After reducing the orde...
Nonlinear dynamical systems exhibit complicated performance around bifurcation points. As the parameter of a system is varied, changes may occur in the qualitative structure of its solution around a point of bifurcation. In order to study dynamical systems with bifurcations, the following methodology is adopted in the theory of dynamical systems. F...
For nonlinear control systems with uncontrollable linearization around an equilibrium, the local asymptotic stability of the linear controllable directions can be easily achieved by linear feedback. Therefore we expect that the stabilizability of the whole system should depend on a reduced order model whose stabilizability depends on the linearly u...
The center manifold theorem is a model reduction technique for determining the local asymptotic stability of an equilibrium of a dynamical system when its linear part is not hyperbolic. The overall system is asymptotically stable if and only if the center manifold dynamics is asymptotically stable. This allows for a substantial reduction in the di...
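The reduction this summary refers to can be stated in its standard form. Writing the system in coordinates

```latex
\dot{x} = A x + f(x,y), \qquad \dot{y} = B y + g(x,y),
```

with A having all eigenvalues on the imaginary axis, B Hurwitz, and f, g vanishing together with their first derivatives at the origin, a (local) center manifold y = h(x) satisfies the invariance equation

```latex
Dh(x)\,\bigl[A x + f(x, h(x))\bigr] = B\,h(x) + g(x, h(x)),
\qquad h(0) = 0,\; Dh(0) = 0,
\qquad\text{reduced dynamics: } \dot{x} = A x + f(x, h(x)).
```

The origin of the full system is then asymptotically stable exactly when the origin of this reduced system is, which is the dimensional reduction exploited throughout the control-bifurcation papers listed here.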
Nonlinear parameterized dynamical systems exhibit complicated performance around bifurcation points. As the parameter of a system is varied, changes may occur in the qualitative structure of its solutions around an equilibrium point. Usually, this happens when some eigenvalues of the linearized system cross the imaginary axis as the parameter chang...
In this paper we introduce the “Controlled Center Dynamics” for nonlinear discrete time systems with control bifurcations. Then we use this approach to stabilize discrete-time systems with a transcontrollable bifurcation.
We study the feedback classification of discrete-time control systems whose linear approximation around an equilibrium is controllable. We provide a normal form for systems under investigation.
In this paper, we use a feedback to change the orientation and the shape of the center manifold of a system with uncontrollable linearization. This change directly affects the reduced dynamics on the center manifold, and hence changes the stability properties of the original system.
In this paper, control systems with two uncontrollable modes on the imaginary axis are studied. The main contributions include the local orientation control of periodic solutions and center manifolds, the quadratic normal form of systems with two imaginary uncontrollable modes, the stabilization of the Hopf bifurcation by state feedback, and the qu...
For nonlinear control systems with uncontrollable linearization around an equilibrium, the local asymptotic stability of the linear controllable directions can be easily achieved by linear feedback. Therefore we expect that the stabilizability of the whole system should depend on a reduced order model whose stabilizability depends on the linearly u...
In this paper we provide a simple algorithm of feedback design for systems with uncontrollable linearization with only quadratic degeneracy, such as transcritical and saddle-node bifurcations. This approach avoids the computation of nonlinear normal forms. It is based only on quadratic invariants which can be determined directly from the quadratic...
In this paper we analyze the systems with zero-Hopf control bifurcation using normal forms and center manifold techniques. After classifying equilibrium sets and finding sufficient stabilizability conditions, we synthesize an asymptotically stabilizing quadratic feedback.
In this paper we study the stabilization of discrete-time controlled dynamics with a period-doubling bifurcation using quadratic normal forms, centre manifold techniques and quadratic feedbacks. The procedure for designing a quadratic controller is also proposed.
We study the feedback classification of discrete-time control systems whose linear approximation around an equilibrium is controllable. We provide a normal form for systems under investigation.
The stabilization of a discrete-time controlled dynamics with one complex uncontrollable mode was analyzed using quadratic normal forms, centre manifold techniques and quadratic feedbacks. The linear part of the feedback was designed to stabilize the controllable subsystem and the quadratic part was designed for modifying the manifold over which th...
Our objective in this paper is to give some results on inverse optimal designs in view of robustness to known/unknown, but ignored input dynamics. This problem comes from the presence of actuators or the wish for using simplified models. Stabilizing control laws may not be robust to this type of uncertainty. By exploiting the robustness of optimal...
In this paper, we determine the normal forms for control systems with a double-zero bifurcation. Based on the normal forms, we find invariants, stabilizability conditions and synthesize a quadratic stabilizing controller.