Analytical Lyapunov Function Discovery: An RL-based Generative Approach

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-SA 4.0
Abstract: Despite advances in learning-based methods, finding valid Lyapunov functions for nonlinear dynamical systems remains challenging. Current neural network approaches face two main issues: challenges in scalable verification and limited interpretability. To address these, we propose an end-to-end framework that uses transformers to construct local analytical Lyapunov functions, which simplifies formal verification, enhances interpretability, and provides valuable insights for control engineers. Our framework consists of a transformer-based trainer that generates candidate Lyapunov functions and a falsifier that verifies candidate expressions and refines the model via risk-seeking policy gradient. Unlike Alfarano et al. (2024), which relies on pre-training and seeks global Lyapunov functions for low-dimensional systems, our model is trained from scratch via reinforcement learning (RL) and succeeds in finding local Lyapunov functions for *high-dimensional* and *non-polynomial* systems. Given the symbolic nature of the Lyapunov function candidates, we employ efficient optimization methods for falsification during training and formal verification tools for the final verification. We demonstrate the efficiency of our approach on a range of nonlinear dynamical systems with up to ten dimensions and show that it can discover Lyapunov functions not previously identified in the control literature. The full implementation is available on [Github](https://github.com/JieFeng-cse/Analytical-Lyapunov-Function-Discovery).
Lay Summary: Certifying the stability of complex systems—like robots or aircraft—requires special mathematical formulas called Lyapunov functions, which must satisfy two Lyapunov conditions on their values. While deep learning has shown promise in finding such functions, current methods often struggle with condition verification and are difficult for engineers to interpret. We introduce a new learning-based approach that uses a symbolic transformer model to generate Lyapunov functions in clear, analytical form. This makes the output expressions easier to understand and verify. Unlike prior methods that rely on extensive pre-training and are limited to low-dimensional systems, our model learns from scratch and can handle more complex, high-dimensional systems. It combines a learning component that proposes candidate expressions with a verification tool that checks the Lyapunov conditions on those candidates and iteratively improves the transformer model via reinforcement learning (RL). Compared with existing works, our method not only improves the efficiency of the discovery process but also identifies new Lyapunov functions that experts hadn't found before, offering valuable insights for designing safe and stable systems.
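To make the two Lyapunov conditions concrete, the sketch below numerically falsifies a candidate function on a toy system. The system, candidate, and sampling-based check are illustrative assumptions for exposition only — they are not the paper's falsifier, which uses efficient optimization methods and formal verification tools.

```python
import numpy as np

# Illustrative toy system (not from the paper): a stable linear
# spiral dx/dt = A x, with the candidate V(x) = ||x||^2.
A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])

def dynamics(x):
    return A @ x

def V(x):
    # Candidate Lyapunov function (hypothetical example)
    return float(x @ x)

def V_dot(x, eps=1e-6):
    # Lie derivative dV/dt = grad V(x) . f(x), via central
    # finite differences so any symbolic candidate V can be checked
    grad = np.array([
        (V(x + eps * e) - V(x - eps * e)) / (2 * eps)
        for e in np.eye(len(x))
    ])
    return float(grad @ dynamics(x))

def falsify(n_samples=10_000, radius=1.0, seed=0):
    """Sample a local region; return a counterexample x where either
    Lyapunov condition (V(x) > 0 or dV/dt < 0 for x != 0) fails,
    or None if the candidate survives sampling-based falsification."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x = rng.uniform(-radius, radius, size=2)
        if np.linalg.norm(x) < 1e-3:  # skip the equilibrium itself
            continue
        if V(x) <= 0 or V_dot(x) >= 0:
            return x  # counterexample found
    return None

print(falsify())  # None: no counterexample in the sampled region
```

Here `V_dot` is strictly negative (the skew part of `A` cancels, leaving dV/dt = -2||x||^2), so sampling finds no counterexample; a bad candidate such as `x[0]**2 - x[1]**2` would be rejected almost immediately.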
Link To Code: https://github.com/JieFeng-cse/Analytical-Lyapunov-Function-Discovery
Primary Area: Applications->Everything Else
Keywords: Control Theory, Lyapunov Function, Symbolic Transformer, Risk-seeking Policy Optimization, Analytical Function Discovery, Stability
Submission Number: 4067