Keywords: Koopman operator theory, transfer learning, transformers, physics-informed embeddings, dynamical systems
TL;DR: We demonstrate that Koopman embeddings enable effective transformer-based transfer learning from chaotic-system prediction to safety-control tasks.
Abstract: This paper investigates the generalisability of Koopman-based representations for chaotic dynamical systems, focusing on their transferability across prediction and control tasks. Using the Lorenz system as a testbed, we propose a three-stage methodology: learning Koopman embeddings through autoencoding, pre-training a transformer on next-state prediction, and fine-tuning for safety-critical control. Our results show that Koopman embeddings outperform both standard and physics-informed PCA baselines, achieving accurate and data-efficient performance. Notably, freezing the pre-trained transformer weights during fine-tuning causes no performance degradation, indicating that the learned representations capture reusable dynamical structure rather than task-specific patterns. These findings support the use of Koopman embeddings as a foundation for multi-task learning in physics-informed machine learning.
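To make the core idea concrete, the sketch below fits a finite-dimensional Koopman operator for the Lorenz system, the paper's testbed. It is a simplification and an assumption on our part: the paper learns its embedding with an autoencoder, whereas here we use a hand-picked dictionary of polynomial observables and a least-squares (EDMD-style) fit; the variable names and dictionary choice are illustrative, not taken from the paper.

```python
import numpy as np

# Lorenz vector field with the standard chaotic parameters.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# One RK4 integration step of the flow map.
def rk4_step(state, dt=0.01):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Collect one trajectory on the attractor.
T = 5000
traj = np.empty((T, 3))
traj[0] = np.array([1.0, 1.0, 1.0])
for t in range(1, T):
    traj[t] = rk4_step(traj[t - 1])

# Lift states into a fixed observable dictionary (quadratic monomials).
# The paper learns this lifting with an autoencoder instead.
def lift(X):
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x, y, z, x * y, x * z, y * z, x**2, y**2, z**2])

Phi, Phi_next = lift(traj[:-1]), lift(traj[1:])

# Least-squares fit of a linear operator K in the lifted space:
# Phi_next ≈ Phi @ K, i.e. nonlinear dynamics become (approximately) linear.
K, *_ = np.linalg.lstsq(Phi, Phi_next, rcond=None)

# Relative one-step prediction error in the lifted space.
err = np.linalg.norm(Phi @ K - Phi_next) / np.linalg.norm(Phi_next)
print(f"relative one-step error: {err:.4f}")
```

The point of the linear operator K is that downstream modules (here, the transformer) can then operate on coordinates whose dynamics are approximately linear, which is the reusable structure the abstract argues transfers across tasks.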
Serve As Reviewer: ~Kyriakos_Hjikakou1
Submission Number: 9