Keywords: reinforcement learning, guarantees, representation learning, model-based
TL;DR: We consider safe policy improvement for on-policy reinforcement learning in general state spaces; we provide safe policy improvement guarantees tailored to (learned) world models and representation learning.
Abstract: Safe policy improvement (SPI) offers theoretical control over policy updates, yet existing guarantees largely concern offline, tabular reinforcement learning (RL). We study SPI in general online settings, combined with world model and representation learning. We develop a theoretical framework showing that restricting policy updates to a well-defined neighborhood of the current policy ensures monotonic improvement and convergence. This analysis links transition and reward prediction losses to representation quality, yielding online, "deep" analogues of classical SPI theorems from the offline RL literature. Building on these results, we introduce DeepSPI, a principled on-policy algorithm that couples local transition and reward losses with regularised policy updates. On the ALE-57 benchmark, DeepSPI matches or exceeds strong baselines, including PPO and DeepMDPs, while retaining theoretical guarantees.
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 9944