Beyond Penalization: Diffusion-based Out-of-Distribution Detection and Selective Regularization in Offline Reinforcement Learning
Keywords: Offline RL, Diffusion Model, Out-of-Distribution (OOD) Detection
TL;DR: We propose DOSER, a diffusion-based framework for OOD detection and selective regularization in offline RL. By leveraging diffusion reconstruction errors to detect OOD actions and regularize them selectively, DOSER achieves state-of-the-art performance on D4RL benchmarks.
Abstract: Offline reinforcement learning (RL) faces the critical challenge of overestimating the value of out-of-distribution (OOD) actions. Existing methods mitigate this issue by penalizing unseen samples, yet they fail to accurately identify OOD actions and may suppress beneficial exploration beyond the support of the behavior policy. Although several methods have been proposed to differentiate among OOD samples with distinct properties, they typically rely on restrictive assumptions about the data distribution and remain limited in discrimination ability. To address this problem, we propose \textbf{DOSER} (\textbf{D}iffusion-based \textbf{O}OD Detection and \textbf{SE}lective \textbf{R}egularization), a novel framework that goes beyond uniform penalization. DOSER trains two diffusion models to capture the behavior policy and the state distribution, using the single-step denoising reconstruction error as a reliable OOD indicator. During policy optimization, it further distinguishes between beneficial and detrimental OOD actions by evaluating predicted transitions, selectively suppressing risky actions while encouraging exploration of high-potential ones. Theoretically, we establish $\gamma$-contraction guarantees that ensure stable convergence with bounded value estimates. Across extensive offline RL benchmarks, DOSER consistently outperforms prior methods, especially on suboptimal datasets.
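To make the OOD indicator concrete, the sketch below illustrates a single-step denoising reconstruction error of the kind the abstract describes. This is a minimal PyTorch illustration under assumed names (`eps_model`, `alpha_bar`, the choice of timestep `t`), not the authors' implementation:

```python
# Minimal sketch (assumed interface): score a batch of samples by noising once
# at timestep t, denoising in a single step, and measuring reconstruction error.
# A large error suggests the sample lies outside the training distribution.
import torch

@torch.no_grad()
def reconstruction_error(eps_model, x0, cond, alpha_bar, t):
    a_bar = alpha_bar[t]                                   # cumulative noise level \bar{alpha}_t
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion step
    eps_hat = eps_model(x_t, cond, t)                      # predicted noise (e.g., action conditioned on state)
    x0_hat = (x_t - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()  # one-step estimate of x0
    return ((x0 - x0_hat) ** 2).flatten(1).sum(dim=-1)     # per-sample squared error as OOD score
```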
Primary Area: reinforcement learning
Submission Number: 22341