On All-Action Policy Gradients

08 Oct 2022 (modified: 03 Nov 2024) · Deep RL Workshop 2022
Keywords: policy gradient, model-based RL, bias-variance trade-off
TL;DR: We derive an optimality condition under which the many-actions policy gradient is preferable to the single-action policy gradient, and we propose a module that leverages a learned dynamics model for all-action sampling.
Abstract: In this paper, we analyze the variance of the stochastic policy gradient with many action samples per state (all-action SPG). We decompose the variance of SPG and derive an optimality condition for all-action SPG. The optimality condition shows when all-action SPG should be preferred over its single-action counterpart and allows us to determine a variance-minimizing sampling scheme for SPG estimation. Furthermore, we propose the dynamics-all-action (DAA) module, an augmentation that allows for all-action sampling without manipulation of the environment. DAA addresses the problems associated with using a Q-network for all-action sampling and can be readily applied to any on-policy SPG algorithm. We find that using DAA with a canonical on-policy algorithm (PPO) yields better sample efficiency and higher policy returns on a variety of continuous-action environments.
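
The sketch below is a minimal illustration, not the authors' implementation, of how an all-action surrogate loss differs from the standard single-action one, and of how a learned one-step dynamics and reward model (a rough reading of the "dynamics" part of DAA) might score the freshly sampled actions in place of a Q-network. All module names, dimensions, and the number of action samples per state are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy dimensions; everything below is an illustrative assumption, not the paper's code.
OBS_DIM, ACT_DIM, N_ACTIONS, GAMMA = 8, 2, 16, 0.99

# Gaussian policy pi(a|s) plus a learned one-step model (dynamics + reward) and a state-value head.
policy_mean = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
log_std = nn.Parameter(torch.zeros(ACT_DIM))
dynamics = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, OBS_DIM))
reward_model = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
value_fn = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, 1))

def policy_dist(obs):
    return torch.distributions.Normal(policy_mean(obs), log_std.exp())

def single_action_loss(obs, act, q_value):
    """Standard single-action SPG surrogate: one (s, a, Q) sample per visited state."""
    logp = policy_dist(obs).log_prob(act).sum(-1)
    return -(logp * q_value.detach()).mean()

def model_based_q(obs_rep, acts):
    """Stand-in for scoring sampled actions with a learned model,
    Q_hat(s, a) = r_hat(s, a) + gamma * V(s'_hat), instead of a Q-network.
    A rough reading of the model-based idea behind DAA, not the paper's exact module."""
    sa = torch.cat([obs_rep, acts], dim=-1)
    return reward_model(sa).squeeze(-1) + GAMMA * value_fn(dynamics(sa)).squeeze(-1)

def all_action_loss(obs, n_actions=N_ACTIONS):
    """All-action SPG surrogate: draw n_actions fresh actions per visited state and
    average the score-function terms, each weighted by a model-based Q estimate."""
    dist = policy_dist(obs)
    acts = dist.sample((n_actions,))                      # [N, B, ACT_DIM]
    logp = dist.log_prob(acts).sum(-1)                    # [N, B]
    obs_rep = obs.unsqueeze(0).expand(n_actions, -1, -1)  # [N, B, OBS_DIM]
    q_hat = model_based_q(obs_rep, acts).detach()         # gradient flows only through log pi
    return -(logp * q_hat).mean()

# Smoke test on random data.
obs, act, q = torch.randn(32, OBS_DIM), torch.randn(32, ACT_DIM), torch.randn(32)
print(single_action_loss(obs, act, q).item(), all_action_loss(obs).item())
```

In this reading, the Q estimate is detached so that only the score-function term carries the policy gradient; the paper's actual DAA module, Q estimator, and variance-minimizing sampling scheme may differ from this sketch.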
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/on-all-action-policy-gradients/code)