Keywords: symmetric divergences; behavior regularization; offline reinforcement learning
TL;DR: we study the problem of behavior regularized policy optimization with symmetric divergences in offline RL.
Abstract: Behavior Regularized Policy Optimization (BRPO) leverages asymmetric (divergence) regularization to mitigate distribution shift in offline reinforcement learning.
This paper is the first to study the open question of symmetric regularization.
We show that symmetric regularization does not admit an analytic optimal policy $\pi^*$, posing a challenge to the practical utility of symmetric BRPO.
We approximate $\pi^*$ with a Taylor series in Pearson-Vajda $\chi^n$ divergences and show that an analytic policy expression exists only when the series is capped at $n=5$.
To compute the solution in a numerically stable manner, we propose to Taylor-expand the conditional symmetry term of the symmetric divergence loss, leading to a novel algorithm: Symmetric $f$-Actor Critic (S$f$-AC).
S$f$-AC achieves consistently strong results across various D4RL MuJoCo tasks. Additionally, it avoids the per-environment failures observed in IQL, SQL, XQL, and AWAC,
opening up possibilities for more diverse and effective regularization choices for offline RL.
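For orientation, here is a generic sketch of the expansion idea, not the paper's exact derivation: assuming the convention $D_f(P\|Q) = \int q(x)\, f\big(p(x)/q(x)\big)\,dx$ and using the Jeffreys divergence purely as an illustrative symmetric regularizer, a Taylor expansion of $f$ around $1$ gives

$$
D_f(P\|Q) \;\approx\; \sum_{n=0}^{N} \frac{f^{(n)}(1)}{n!}\,\chi^n(P\|Q),
\qquad
\chi^n(P\|Q) \;=\; \int \frac{\big(p(x)-q(x)\big)^{n}}{q(x)^{\,n-1}}\,dx,
$$

where $\chi^n$ is the Pearson-Vajda divergence of order $n$. A symmetric loss such as $D_{\mathrm{KL}}(P\|Q) + D_{\mathrm{KL}}(Q\|P)$ then inherits a polynomial surrogate in the density ratio once the series is truncated at a finite order; per the abstract, an analytic policy expression exists only when that order is capped at $n=5$.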
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 3986