Lyapunov Learning at the Onset of Chaos

Published: 09 Jun 2025, Last Modified: 09 Jun 2025 · HiLD at ICML 2025 Poster · License: CC BY 4.0
Keywords: Chaotic Systems, Regime Shift, Non-Stationary Time Series, Dynamical Systems, Lyapunov Exponent, Continual Learning, Edge of Chaos, Dynamic Adaptation, Deep Neural Networks, Regularization
TL;DR: This work introduces Lyapunov Learning, a regularizer that steers neural networks toward the onset of chaos to improve adaptability in non-stationary systems and enhance resilience to abrupt regime shifts.
Abstract: Handling regime shifts and non-stationary time series in deep learning systems presents a significant challenge. In online learning, newly introduced information can disrupt previously stored knowledge and alter the model's overall paradigm, especially with non-stationary data sources. It is therefore crucial for neural systems to adapt quickly to new paradigms while preserving essential past knowledge relevant to the overall problem. In this paper, we propose a novel training algorithm for neural networks called $\textit{Lyapunov Learning}$. This approach leverages the properties of nonlinear chaotic dynamical systems to prepare the model for potential regime shifts. Drawing inspiration from Stuart Kauffman's Adjacent Possible theory, we exploit unexplored local regions of the solution space to enable flexible adaptation. The neural network is designed to operate at the edge of chaos, where the maximum Lyapunov exponent, which measures a system's sensitivity to small perturbations, fluctuates around zero over time. Our approach yields significant improvements in experiments involving regime shifts in non-stationary systems. In particular, we train a neural network to handle an abrupt change in the parameters of the chaotic Lorenz system. The neural network equipped with Lyapunov learning significantly outperforms regular training, improving the loss ratio by about $96\%$.
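For context, the maximal Lyapunov exponent of a discrete-time map $x_{t+1} = f(x_t)$ measures the average exponential rate at which an infinitesimal perturbation grows along an orbit. With $J_f$ the Jacobian of $f$ and $v_t$ a tangent vector renormalized at each step (the standard Benettin procedure), it can be estimated as

$$\hat{\lambda}_{\max} = \frac{1}{T} \sum_{t=0}^{T-1} \ln \frac{\lVert J_f(x_t)\, v_t \rVert}{\lVert v_t \rVert}.$$

A positive $\lambda_{\max}$ indicates chaos, a negative one indicates contraction, and $\lambda_{\max} \approx 0$ marks the edge of chaos that the regularizer targets. The sketch below is a minimal, illustrative implementation of such a regularizer, not the authors' code: it views the network as a map $x_{t+1} = f_\theta(x_t)$, estimates $\hat{\lambda}_{\max}$ with differentiable Jacobian-vector products, and penalizes its magnitude so training drives it toward zero. The names (`max_lyapunov_estimate`, `reg_weight`, `n_steps`) and the penalty form $|\hat{\lambda}_{\max}|$ are assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's implementation) of a Lyapunov
# regularizer: estimate the maximal Lyapunov exponent of the learned map
# x_{t+1} = f_theta(x_t) with the Benettin procedure, then penalize
# |lambda_max| so training keeps the network near the onset of chaos.
import torch
import torch.nn as nn

def max_lyapunov_estimate(f, x0, n_steps=50, eps=1e-12):
    """Differentiable estimate of the maximal Lyapunov exponent along an orbit."""
    x = x0
    v = torch.randn_like(x0)            # random initial tangent vector
    v = v / v.norm()
    log_growth = x0.new_zeros(())
    for _ in range(n_steps):
        # J_f(x) @ v via double backprop; create_graph=True keeps the
        # result differentiable w.r.t. the network parameters.
        x_next, jv = torch.autograd.functional.jvp(
            f, (x,), (v,), create_graph=True
        )
        growth = jv.norm()
        log_growth = log_growth + torch.log(growth + eps)
        v = jv / (growth + eps)         # renormalize the tangent vector
        x = x_next
    return log_growth / n_steps         # average log stretching rate per step

# Usage sketch: one training step with the Lyapunov penalty added to the task loss.
f = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
x0 = torch.randn(3)

lam_max = max_lyapunov_estimate(f, x0)
task_loss = f(x0).pow(2).mean()         # stand-in for the actual prediction loss
reg_weight = 1e-2                       # assumed regularization strength
loss = task_loss + reg_weight * lam_max.abs()

opt.zero_grad()
loss.backward()
opt.step()
```

Penalizing $|\hat{\lambda}_{\max}|$ rather than $\hat{\lambda}_{\max}$ itself pushes the estimate toward zero from both sides, which matches the stated goal of keeping the exponent fluctuating around zero rather than simply suppressing chaos.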
Student Paper: Yes
Submission Number: 26