State Advantage Weighting for Offline RL

01 Mar 2023 (modified: 03 Nov 2024) · Submitted to Tiny Papers @ ICLR 2023 · Readers: Everyone
Keywords: state advantage, offline reinforcement learning, continuous control
TL;DR: We investigate QSS learning for offline RL, leveraging state advantage weighting for the update.
Abstract: We present \textit{state advantage weighting} for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from the value. We expect the agent to reach high-reward states, with the action determined by how the agent can get to the corresponding state. Experiments on D4RL datasets show that our proposed method achieves strong performance against common baselines. Our code is publicly available at https://github.com/dmksjfl/SAW.
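To make the idea concrete, below is a minimal PyTorch sketch of a state-advantage-weighted update as described in the abstract: a QSS critic $Q(s,s^\prime)$, a state value $V(s)$, and an action model conditioned on $(s, s^\prime)$ whose loss is weighted by the state advantage $A(s,s^\prime) = Q(s,s^\prime) - V(s)$. The exponential weighting form, which loss the weight is applied to, and the network interfaces (`q_net`, `v_net`, `policy`) are illustrative assumptions, not the paper's exact implementation; consult the linked repository for the authors' code.

```python
import torch
import torch.nn.functional as F

def saw_update(batch, q_net, v_net, policy, q_opt, v_opt, pi_opt,
               gamma=0.99, beta=3.0, max_weight=100.0):
    """One state-advantage-weighted update (illustrative sketch, not the official code).

    Assumed interfaces: q_net(s, s_next) -> Q(s, s'), v_net(s) -> V(s),
    policy(s, s_next) -> predicted action (inverse-dynamics-style action model).
    """
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset

    # QSS critic: regress Q(s, s') toward the one-step target r + gamma * V(s').
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * v_net(s_next).squeeze(-1)
    q_loss = F.mse_loss(q_net(s, s_next).squeeze(-1), target)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # State value: fit V(s) to Q(s, s') (the exact regression loss may differ).
    with torch.no_grad():
        q_val = q_net(s, s_next).squeeze(-1)
    v_loss = F.mse_loss(v_net(s).squeeze(-1), q_val)
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # State advantage A(s, s') = Q(s, s') - V(s); weight the action-reconstruction
    # loss by exp(A / beta), clipped for numerical stability (assumed form).
    with torch.no_grad():
        adv = q_net(s, s_next).squeeze(-1) - v_net(s).squeeze(-1)
        weight = torch.clamp(torch.exp(adv / beta), max=max_weight)
    pi_loss = (weight * ((policy(s, s_next) - a) ** 2).sum(-1)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()

    return q_loss.item(), v_loss.item(), pi_loss.item()
```

In this sketch, transitions with a higher state advantage (i.e., $s^\prime$ looks better than the average next state from $s$) contribute more to the action model, which is one way to realize "the action is determined by how the agent gets to that state" without ever evaluating $Q(s,a)$.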