Efficient Entropy For Policy Gradient with Multi-Dimensional Action Space

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: This paper considers the entropy bonus, which is used to encourage exploration in policy gradient methods. In the case of high-dimensional action spaces, calculating the entropy and its gradient requires enumerating all the actions in the action space and running a forward and backward pass for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. We apply these estimators to several parameterized policy models, including Independent Sampling, CommNet, Autoregressive with Modified MDP, and Autoregressive with LSTM. Finally, we test our algorithms on a multi-hunter multi-rabbit grid environment. The results show that our entropy estimators substantially improve performance at marginal additional computational cost.
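For intuition on why sampling avoids enumerating the joint action space, here is a minimal sketch of one standard sampling-based estimator, assuming a factorized categorical policy in PyTorch; the function name and setup are illustrative and not the paper's code. Since H(π) = E_{a∼π}[−log π(a)], the negative log-probability of a single sampled joint action is an unbiased estimate of the entropy:

```python
import torch
from torch.distributions import Categorical

# Illustrative sketch (not the paper's code): a single-sample unbiased estimate
# of the policy entropy for a factorized policy pi(a) = prod_k pi_k(a_k).
# Because H(pi) = E_{a ~ pi}[-log pi(a)], the quantity -log pi(a) for one
# sampled joint action a has expectation H(pi), with no enumeration over the
# exponentially large joint action space.

def sampled_entropy_estimate(logits_per_dim):
    """logits_per_dim: list of 1-D logit tensors, one per action dimension."""
    neg_log_prob = torch.tensor(0.0)
    for logits in logits_per_dim:
        dist = Categorical(logits=logits)
        a_k = dist.sample()                                 # sample this dimension's action
        neg_log_prob = neg_log_prob - dist.log_prob(a_k)    # accumulate -log pi_k(a_k)
    return neg_log_prob  # unbiased estimate of H(pi)

# Usage: three action dimensions with 5 choices each (125 joint actions,
# but the estimate costs a single sampled joint action).
logits = [torch.randn(5, requires_grad=True) for _ in range(3)]
print(sampled_entropy_estimate(logits))
```

Note that naively backpropagating through this estimate does not yield an unbiased gradient of the entropy (the expectation of the score function is zero, so an additional score-function term is needed); handling the gradient correctly is part of what the paper's estimators address.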
Keywords: deep reinforcement learning, policy gradient, multidimensional action space, entropy bonus, entropy regularization, discrete action space
TL;DR: Unbiased policy entropy estimators and policy parameterizations for MDPs with large multi-dimensional discrete action spaces