On 14 January, we will have Boumediene Hamzi from Imperial College London, Caltech and the Alan Turing Institute.
Talk title: Machine Learning and Dynamical Systems meet in Reproducing Kernel Hilbert Spaces
Abstract:
Since its inception in the 19th century through the efforts of Poincaré and Lyapunov, the theory of dynamical systems has addressed the qualitative behavior of dynamical processes as understood from models. From this perspective, modeling in applications requires a detailed understanding of the mechanisms at play, leading to mathematical descriptions that approximate the observed reality, often expressed as systems of ordinary/partial, underdetermined (control), deterministic/stochastic differential or difference equations. While such models are very precise for many processes, for some of the most challenging applications of dynamical systems (such as climate dynamics, brain dynamics, biological systems, or financial markets), deriving faithful models is notably difficult.
On the other hand, machine learning is concerned with algorithms designed to accomplish a given task, whose performance improves with more data. Applications include computer vision, stock market analysis, speech recognition, recommender systems, and sentiment analysis. This data-driven paradigm is invaluable in settings where no explicit model is available but measurement data is abundant, an increasingly common situation in modern scientific and engineering problems. The intersection of dynamical systems and machine learning remains comparatively underexplored, and the objective of this talk is to show that working in reproducing kernel Hilbert spaces (RKHS) offers a powerful framework for a data-based theory of nonlinear dynamical systems.
In the first part of the talk, we introduce simple methods to learn surrogate models for complex systems. We present variants of Kernel Flows as practical approaches for learning the kernels that appear in the emulators we use in our work. We discuss parametric and nonparametric kernel flows for learning chaotic dynamical systems, as well as learning from irregularly sampled time series and from partial observations. We also introduce Sparse Kernel Flows and Hausdorff-metric-based Kernel Flows (HMKFs), applying them to learn a benchmark library of 132 chaotic dynamical systems, and illustrate these learned-kernel emulators on climate/geophysical forecasting tasks, showing accurate, low-cost predictions from data. We further extend Kernel Mode Decomposition to design kernels aimed at detecting critical transitions in fast-slow random dynamical systems, with applications to seizure detection.
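To give a flavor of the kernel-emulator idea (this is a simplified illustration, not the speaker's Kernel Flows algorithm; the map, kernel, and parameters below are our own choices for the sketch), one can learn a one-step surrogate of a chaotic map by kernel ridge regression with a Gaussian kernel:

```python
import numpy as np

def rbf(X, Y, gamma=40.0):
    # Gaussian kernel matrix k(x, y) = exp(-gamma * |x - y|^2) for 1-D inputs
    return np.exp(-gamma * (X[:, None] - Y[None, :]) ** 2)

# Generate a trajectory of the chaotic logistic map x_{n+1} = 4 x_n (1 - x_n)
x = np.empty(300)
x[0] = 0.2
for n in range(299):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Kernel ridge regression: fit the one-step map x_n -> x_{n+1}
X_train, y_train = x[:-1], x[1:]
K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

def predict(x_new):
    # Surrogate one-step prediction via the learned kernel expansion
    return rbf(np.atleast_1d(x_new), X_train) @ alpha

# One-step prediction error at a point not in the training trajectory
x_test = 0.3712
err = abs(predict(x_test)[0] - 4.0 * x_test * (1.0 - x_test))
```

In Kernel Flows, the kernel hyperparameters themselves (here the fixed `gamma`) would be learned from the data rather than hand-picked.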
We then turn to stability and optimal control. We introduce a data-based perspective on center manifolds, propose kernel methods for computing center manifolds, and outline a data-based version of the center manifold theorem. We also present kernel methods for computing Lyapunov functions. Finally, we describe recent progress on Hamilton–Jacobi–Bellman (HJB) equations and nonlinear optimal control using kernel-based LMI methods: the Hamilton–Jacobi inequality is rewritten via Schur complement arguments into a convex LMI/SDP formulation, while the value function (and, crucially, its gradient) is represented in an RKHS through kernel expansions. To avoid degenerate solutions and connect with classical optimal control, we impose a Riccati–Hessian consistency constraint at the equilibrium, yielding computationally tractable controllers with accompanying stability and suboptimality guarantees.
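The Schur-complement step can be sketched in standard control-affine notation (the dynamics $\dot{x} = f(x) + g(x)u$, running cost $q(x) + u^\top R u$ with $R \succ 0$, and the sign conventions below are our assumptions for illustration, not notation taken from the talk). The Hamilton-Jacobi inequality

```latex
\nabla V^\top f(x) + q(x) - \tfrac{1}{4}\,\nabla V^\top g(x)\, R^{-1} g(x)^\top \nabla V \;\ge\; 0
```

holds, given $R \succ 0$, if and only if

```latex
\begin{bmatrix}
\nabla V^\top f(x) + q(x) & \tfrac{1}{2}\,\nabla V^\top g(x) \\
\tfrac{1}{2}\, g(x)^\top \nabla V & R
\end{bmatrix} \succeq 0,
```

by taking the Schur complement of the $R$ block. The matrix on the left is affine in $\nabla V$, so once $\nabla V$ is parameterized linearly through a kernel expansion, the condition becomes an LMI in the expansion coefficients.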
In the second part, we introduce a data-based approach to estimating key quantities that arise in the study of nonlinear autonomous, control, and random dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be extended to nonlinear systems, often with a reasonable expectation of success, once the nonlinear dynamics are mapped into a high- or infinite-dimensional RKHS. We develop computable, nonparametric estimators approximating controllability and observability energies for nonlinear systems, apply them to model reduction of nonlinear control systems, and show how controllability-energy estimation provides a practical route to approximating the invariant measure of an ergodic, stochastically forced nonlinear system.
Boumediene Hamzi is currently a Senior Scientist in the Department of Computing and Mathematical Sciences at Caltech. He also co-leads the Research Interest Group on Machine Learning and Dynamical Systems at the Alan Turing Institute. Broadly speaking, his research is at the interface of machine learning and dynamical systems.
In-person attendance is encouraged. If you are unable to attend physically, please register via the Ticketsource link provided and you will receive online joining instructions.