Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We develop finite-sample learning bounds for inferring the spectral decomposition of a strongly continuous semigroup of Markov operators from a single trajectory of non-i.i.d. data.
Abstract: Markov processes serve as universal models for many real-world random processes. This paper presents a data-driven approach to learning these models through the spectral decomposition of the infinitesimal generator (IG) of the Markov semigroup. The IG's unbounded nature complicates traditional methods such as vector-valued regression and Hilbert-Schmidt operator analysis. Existing techniques, including physics-informed kernel regression, are computationally expensive and limited in scope, with no recovery guarantees for transfer operator methods when the time-lag is small. We propose a novel method leveraging the IG's resolvent, characterized by the Laplace transform of transfer operators. This approach is robust to time-lag variations, ensuring accurate eigenvalue learning even for small time-lags. Our statistical analysis applies to a broader class of Markov processes than current methods while reducing computational complexity from quadratic to linear in the state dimension. Finally, we demonstrate our theoretical findings in several experiments.
Lay Summary: Many natural and engineered systems—like weather, markets, or brain activity—can be modeled as Markov processes, where the future depends only on the present. But learning accurate models of these systems from data is difficult, especially when observations are noisy, irregular, or limited in time. Existing methods struggle when the time between observations is small or when the system is high-dimensional, often requiring expensive computations and offering limited guarantees. We tackled this by developing a new approach that learns key features of the system—its “modes of behavior”—through the infinitesimal generator, a mathematical object that describes how the system evolves continuously in time. Instead of trying to work with this complex object directly, we use its resolvent, which is more stable and computable from data using a trick involving Laplace transforms. Our method is fast (scales linearly with system size), works well even with short time intervals, and applies to a wider class of systems than existing techniques. It gives researchers a more reliable and efficient way to understand and simulate complex dynamic processes from data.
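As a toy illustration of the Laplace-transform identity behind the resolvent approach (not the paper's kernel estimator), the sketch below numerically checks that the resolvent of a generator satisfies $(\lambda I - L)^{-1} = \int_0^\infty e^{-\lambda t}\, e^{tL}\, dt$ for a small, hypothetical 3-state Markov generator. The matrix `L`, the parameter `lam`, and the time grid are all invented for illustration; for the semigroup here we use a simple eigendecomposition rather than any method from the paper.

```python
import numpy as np

# Hypothetical generator of a 3-state continuous-time Markov chain:
# non-negative off-diagonal rates, each row summing to zero.
L = np.array([[-1.0, 0.7, 0.3],
              [0.5, -1.2, 0.7],
              [0.2, 0.8, -1.0]])
lam = 2.0                      # resolvent parameter, Re(lam) > 0
n = L.shape[0]

# Direct resolvent: (lam*I - L)^{-1}.
R_direct = np.linalg.inv(lam * np.eye(n) - L)

# Resolvent via the Laplace transform of the semigroup,
# R(lam) = \int_0^infty e^{-lam*t} e^{tL} dt, on a truncated time grid.
# The semigroup e^{tL} is formed from the eigendecomposition of L.
w, V = np.linalg.eig(L)        # e^{tL} = V diag(e^{t w}) V^{-1}
Vinv = np.linalg.inv(V)
dt, T = 1e-3, 25.0             # step and truncation horizon (illustrative)
R_laplace = np.zeros((n, n))
for t in np.arange(0.0, T, dt):
    expm_tL = (V * np.exp(t * w)) @ Vinv   # matrix exponential at time t
    R_laplace += np.exp(-lam * t) * expm_tL.real * dt

# The Riemann-sum approximation should match the direct resolvent
# up to discretization error of order dt.
print(np.max(np.abs(R_direct - R_laplace)))
```

The same identity is what makes the resolvent computable from trajectory data: applying the Laplace transform to estimated transfer operators at varying lags avoids ever handling the unbounded generator directly.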
Link To Code: https://github.com/vladi-iit/LaRRR/
Primary Area: Theory->Learning Theory
Keywords: continuous Markov processes, statistical learning theory, data-driven dynamical systems, spectral decomposition, kernel methods
Submission Number: 10886