Stabilizing the Kuramoto–Sivashinsky Equation Using Deep Reinforcement Learning with a DeepONet Prior

Published: 19 Jun 2025, Last Modified: 12 Jul 2025
Venue: 4th Muslims in ML Workshop co-located with ICML 2025 (Poster)
License: CC BY 4.0
Submission Track: Track 1: Machine Learning Research by Muslim Authors
Keywords: Reinforcement Learning, DeepONet, Kuramoto–Sivashinsky Equation, Deep Deterministic Policy Gradient (DDPG), Nonlinear System Control
TL;DR: A DeepONet-guided reinforcement learning framework that uses DDPG to improve control performance and energy efficiency for the KS equation.
Abstract: This paper presents a novel reinforcement learning framework that leverages DeepONet priors to stabilize the Kuramoto–Sivashinsky (KS) equation. DeepONet first learns a generalized control operator offline, which is then refined online using Deep Deterministic Policy Gradient (DDPG) to adapt to trajectory-specific dynamics. The approach achieves a 55% energy reduction within 0.2 time units and significantly narrows chaotic fluctuations, outperforming traditional feedback control. DeepONet reduces MSE by 99.3%, while the RL agent improves the mean episode reward by 59.3%. The method offers a scalable and effective solution for controlling complex, high-dimensional nonlinear systems.
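For context, the uncontrolled one-dimensional KS equation is standard; a common controlled form adds a forcing term on the right-hand side (the exact actuation used in the paper is not specified on this page, so the additive f(x, t) below is an assumption):

```latex
% Standard 1D Kuramoto–Sivashinsky equation; the additive control term
% f(x,t) is an assumed actuation form, not taken from the paper.
\[
  \partial_t u + u\,\partial_x u + \partial_x^2 u + \partial_x^4 u = f(x, t)
\]
```

As a minimal sketch of the two-stage pipeline the abstract describes, assuming a PyTorch setup in which a DeepONet maps a sensed state snapshot and an actuator location to a control value, and the DDPG actor is warm-started from that offline prior. All names, dimensions, and data below are illustrative placeholders, not the authors' implementation:

```python
# Hypothetical sketch: DeepONet control prior warm-starting a DDPG actor.
# Names (ControlDeepONet, SENSOR_DIM, ...) and all data are illustrative.
import copy
import torch
import torch.nn as nn

SENSOR_DIM, TRUNK_DIM, LATENT = 64, 1, 40  # u sampled at 64 points; scalar query location

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.Tanh()]
    return nn.Sequential(*layers[:-1])  # no activation on the output layer

class ControlDeepONet(nn.Module):
    """Maps a sensed state u(x_1..x_m) and a query location y to a control value a(y)."""
    def __init__(self):
        super().__init__()
        self.branch = mlp([SENSOR_DIM, 128, 128, LATENT])  # encodes the state snapshot
        self.trunk = mlp([TRUNK_DIM, 128, 128, LATENT])    # encodes the actuator location
    def forward(self, u, y):
        # DeepONet output: inner product of branch and trunk embeddings.
        return (self.branch(u) * self.trunk(y)).sum(-1, keepdim=True)

# --- Offline stage: fit the operator prior on (state, location, control) triples ---
prior = ControlDeepONet()
opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
u, y, a_target = torch.randn(256, SENSOR_DIM), torch.rand(256, 1), torch.randn(256, 1)  # placeholder data
for _ in range(100):
    loss = nn.functional.mse_loss(prior(u, y), a_target)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Online stage: DDPG refinement, actor warm-started from the prior ---
actor = copy.deepcopy(prior)                 # the offline prior becomes the initial policy
critic = mlp([SENSOR_DIM + 1, 128, 128, 1])  # Q(u, a)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
# One illustrative DDPG actor update: ascend the critic's value of the actor's action.
a = actor(u, y)
actor_loss = -critic(torch.cat([u, a], dim=-1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

The design choice sketched here is that the offline operator supplies the actor's initialization, so the online DDPG stage only needs to correct trajectory-specific errors rather than learn a control policy from scratch.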
Submission Number: 22