Keywords: Gradient TD, Deep RL, Streaming RL
Abstract: Achieving fast and stable off-policy learning in deep reinforcement learning (RL) is challenging.
Most existing approaches rely on semi-gradient temporal-difference (TD) methods for their simplicity and efficiency but are consequently susceptible to divergence.
While more principled approaches like Gradient TD (GTD) methods have strong convergence guarantees, they have rarely been used in deep RL.
Recent work introduced the Generalized Projected Bellman Error ($\overline{\text{GPBE}}$), enabling GTD methods to work efficiently with nonlinear function approximation.
However, that work is limited to one-step methods, which are slow at credit assignment and require a large number of samples.
In this paper, we extend the $\overline{\text{GPBE}}$ objective to use multistep credit assignment based on the $\lambda$-return and derive three gradient-based methods that optimize this new objective.
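For context, the multistep target referenced here is the standard $\lambda$-return (standard textbook definition; the paper's specific $\lambda$-based $\overline{\text{GPBE}}$ objective is not reproduced here):
$$
G_t^{\lambda} \doteq (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1} G_{t:t+n}, \qquad G_{t:t+n} \doteq \sum_{k=0}^{n-1} \gamma^{k} R_{t+k+1} + \gamma^{n}\, \hat{v}(S_{t+n}; \mathbf{w}),
$$
which interpolates between one-step TD targets ($\lambda = 0$) and Monte Carlo returns ($\lambda = 1$).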
We provide both a forward-view formulation compatible with experience replay and a backward-view formulation compatible with streaming algorithms.
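As a rough illustration of the backward view, the standard accumulating-trace semi-gradient TD($\lambda$) update is shown below for reference only; the paper's gradient-based variants differ:
$$
\delta_t = R_{t+1} + \gamma\, \hat{v}(S_{t+1}; \mathbf{w}_t) - \hat{v}(S_t; \mathbf{w}_t), \qquad \mathbf{z}_t = \gamma\lambda\, \mathbf{z}_{t-1} + \nabla_{\mathbf{w}} \hat{v}(S_t; \mathbf{w}_t), \qquad \mathbf{w}_{t+1} = \mathbf{w}_t + \alpha\, \delta_t\, \mathbf{z}_t,
$$
where the eligibility trace $\mathbf{z}_t$ allows updates to be applied online from a stream of experience, whereas the forward view computes $\lambda$-return targets from stored trajectories.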
Finally, we evaluate the proposed algorithms and show that they outperform PPO in MuJoCo environments and StreamQ in MinAtar environments.
Submission Number: 302