TL;DR: We introduce a principled behavior-regularized reinforcement learning framework tailored for diffusion-based policies in the offline setting.
Abstract: Behavior regularization, which constrains the policy to stay close to some behavior policy, is widely used in offline reinforcement learning (RL) to manage the risk of hazardous exploitation of unseen actions. Nevertheless, the existing literature on behavior-regularized RL focuses primarily on explicit policy parameterizations, such as Gaussian policies. Consequently, it remains unclear how to extend this framework to more advanced policy parameterizations, such as diffusion models. In this paper, we introduce BDPO, a principled behavior-regularized RL framework tailored for diffusion-based policies, thereby combining the expressive power of diffusion policies with the robustness provided by regularization. The key ingredient of our method is to compute the Kullback-Leibler (KL) regularization analytically as the accumulated discrepancy between the reverse-time transition kernels along the diffusion trajectory. By integrating this regularization, we develop an efficient two-time-scale actor-critic RL algorithm that produces the optimal policy while respecting the behavior constraint. Comprehensive evaluations on synthetic 2D tasks and continuous control tasks from the D4RL benchmark validate its effectiveness and superior performance.
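To make the pathwise KL concrete, below is a minimal sketch (not the authors' implementation) of the core idea: when both the actor's and the behavior policy's reverse-time kernels are Gaussian with a shared, fixed variance schedule, each per-step KL reduces to a scaled squared difference of predicted means, accumulated along the trajectory. The names `actor_mean`, `behavior_mean`, and `sigmas` are hypothetical stand-ins for the mean-prediction networks and the noise schedule.

```python
import torch

def pathwise_kl(actor_mean, behavior_mean, sigmas, x_T):
    """Accumulate per-step Gaussian KL along a reverse diffusion trajectory.

    actor_mean, behavior_mean: callables (x_t, t) -> predicted posterior mean
    sigmas: 1D tensor of per-step reverse-kernel std devs (hypothetical schedule)
    x_T: batch of initial Gaussian noise, shape (B, action_dim)
    """
    x_t = x_T
    kl = torch.zeros(x_T.shape[0], device=x_T.device)
    for t in reversed(range(len(sigmas))):
        mu_actor = actor_mean(x_t, t)
        mu_behavior = behavior_mean(x_t, t)
        # With shared variance s^2 per step:
        # KL(N(mu_actor, s^2 I) || N(mu_behavior, s^2 I))
        #   = ||mu_actor - mu_behavior||^2 / (2 s^2)
        kl = kl + ((mu_actor - mu_behavior) ** 2).sum(-1) / (2 * sigmas[t] ** 2)
        # Sample the next state from the actor's reverse kernel
        # (no noise on the final denoising step).
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mu_actor + sigmas[t] * noise
    return kl  # shape (B,): one pathwise KL estimate per sampled trajectory
```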
Lay Summary: This paper presents Behavior-Regularized Diffusion Policy Optimization (BDPO), a principled and efficient framework that integrates diffusion-based policies into offline RL. By introducing pathwise KL regularization across intermediate diffusion steps and employing two-time-scale actor-critic optimization, BDPO achieves theoretical grounding, efficient training, and superior empirical performance on standard benchmarks.
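For readers curious about the two-time-scale structure, here is a hedged, generic sketch of such a loop. It assumes the critic is updated on a faster time scale (more frequent updates with a larger step size) than the actor; all names (`critic_loss`, `actor_loss`, `batch_sampler`) are illustrative placeholders, not the paper's API.

```python
import torch

def train_two_time_scale(actor, critic, batch_sampler, critic_loss, actor_loss,
                         iters=1000, critic_steps_per_actor_step=5,
                         critic_lr=3e-4, actor_lr=3e-5):
    critic_opt = torch.optim.Adam(critic.parameters(), lr=critic_lr)
    actor_opt = torch.optim.Adam(actor.parameters(), lr=actor_lr)
    for _ in range(iters):
        # Fast time scale: several critic updates per actor update,
        # so the critic tracks the current (slowly moving) actor.
        for _ in range(critic_steps_per_actor_step):
            batch = batch_sampler()
            critic_opt.zero_grad()
            critic_loss(critic, actor, batch).backward()
            critic_opt.step()
        # Slow time scale: one actor update against the near-converged critic.
        batch = batch_sampler()
        actor_opt.zero_grad()
        actor_loss(actor, critic, batch).backward()
        actor_opt.step()
```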
Link To Code: https://github.com/typoverflow/flow-rl
Primary Area: Reinforcement Learning->Batch/Offline
Keywords: Behavior-regularized Reinforcement Learning, Diffusion Policy, Offline Reinforcement Learning
Submission Number: 9788