SAC Flow: Sample-Efficient Reinforcement Learning of Flow-Based Policies via Velocity-Reparameterized Sequential Modeling
Keywords: Flow-Based Policy, Sample-Efficient Reinforcement Learning, Soft Actor-Critic, Sequential Modeling
TL;DR: We fix the unstable training of flow-based policies in off-policy RL by viewing them as RNNs, using GRU/Transformer designs to tame exploding gradients and achieve SOTA sample efficiency.
Abstract: Training expressive flow-based policies with off-policy reinforcement learning is notoriously unstable due to gradient pathologies in the multi-step action sampling process. We trace this instability to a fundamental connection: the flow rollout is algebraically equivalent to a residual recurrent computation, making it susceptible to the same vanishing and exploding gradients as RNNs. To address this, we reparameterize the velocity network using principles from modern sequential models, introducing two stable architectures: Flow-G, which incorporates a GRU-inspired gated velocity, and Flow-T, which uses a Transformer-inspired decoded velocity. We then develop a practical SAC-based algorithm, enabled by a noise-augmented rollout, that facilitates direct end-to-end training of these policies. Our approach supports both from-scratch and offline-to-online learning and achieves state-of-the-art performance on continuous control and robotic manipulation benchmarks, eliminating the need for common workarounds like policy distillation or surrogate objectives. Anonymized code is available at \url{https://anonymous.4open.science/r/SAC-FLOW}.
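To make the central observation concrete, the sketch below shows a plain Euler rollout of a flow-based policy and why it behaves like a residual RNN during backpropagation. This is a minimal illustration under assumed names (`velocity_net`, `num_steps`, `FlowPolicyRollout`), not the authors' implementation; see the anonymized repository for the actual code.

```python
import torch
import torch.nn as nn


class FlowPolicyRollout(nn.Module):
    """Illustrative K-step Euler rollout of a flow-based policy.

    Hypothetical sketch: the velocity network signature v_theta(a, s, t)
    and the fixed step count are assumptions for exposition only.
    """

    def __init__(self, velocity_net: nn.Module, num_steps: int = 10):
        super().__init__()
        self.velocity_net = velocity_net  # v_theta(action, state, time)
        self.num_steps = num_steps

    def forward(self, state: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        # Start from Gaussian noise and integrate the learned velocity field.
        a = noise
        dt = 1.0 / self.num_steps
        for k in range(self.num_steps):
            t = torch.full((a.shape[0], 1), k * dt, device=a.device)
            # Residual recurrence: a_{k+1} = a_k + dt * v_theta(a_k, s, t_k).
            # Backpropagating through the rollout multiplies Jacobians
            # (I + dt * dv/da) across all K steps, the same product structure
            # that causes vanishing/exploding gradients in RNNs.
            a = a + dt * self.velocity_net(a, state, t)
        return a
```

Reparameterizing the velocity update with a gated (GRU-style) or decoded (Transformer-style) form, as in Flow-G and Flow-T, replaces this raw residual recurrence with a better-conditioned one.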
Primary Area: reinforcement learning
Submission Number: 4770