A Policy Gradient Method for Task-Agnostic Exploration

12 Jun 2020 (modified: 29 Sept 2024) · LifelongML@ICML2020 · Readers: Everyone
Student First Author: Yes
TL;DR: We present a novel policy-search algorithm to learn a task-agnostic exploration policy in continuous domains, which makes it possible to solve a variety of meaningful goal-based tasks downstream.
Keywords: Unsupervised Reinforcement Learning, Intrinsic Motivation, Task-Agnostic Exploration
Abstract: In a reward-free environment, what is a suitable intrinsic objective for an agent to pursue so that it can learn an optimal task-agnostic exploration policy? In this paper, we argue that the entropy of the state distribution induced by limited-horizon trajectories is a sensible target. In particular, we present a novel and practical policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to learn a policy that maximizes a non-parametric, $k$-nearest neighbors estimate of the state distribution entropy. In contrast to known methods, MEPOL is completely model-free, as it requires neither estimating the state distribution of any policy nor modeling the transition dynamics. We then empirically show that MEPOL learns a maximum-entropy exploration policy in high-dimensional, continuous-control domains, and that this policy facilitates learning a variety of meaningful reward-based tasks downstream.
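To make the objective in the abstract concrete, here is a minimal sketch of a non-parametric $k$-nearest-neighbors (Kozachenko-Leonenko) entropy estimate over a batch of sampled states. This is only an illustration of the kind of estimator the abstract refers to: the function name and constants are ours, it relies on NumPy, SciPy, and scikit-learn as assumed dependencies, and it omits MEPOL's actual policy-optimization procedure, which the paper describes.

```python
import numpy as np
from scipy.special import digamma, gammaln
from sklearn.neighbors import NearestNeighbors


def knn_entropy_estimate(states: np.ndarray, k: int = 4) -> float:
    """Kozachenko-Leonenko estimate of the differential entropy of the
    distribution that generated `states` (shape: [n_samples, state_dim])."""
    n, d = states.shape
    # Distance from each sample to its k-th nearest neighbor (self excluded).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(states)
    distances, _ = nn.kneighbors(states)
    eps = distances[:, -1]
    # Log-volume of the unit ball in d dimensions.
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    # Kozachenko-Leonenko estimator of differential entropy.
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps + 1e-12))


if __name__ == "__main__":
    # Placeholder for states collected by rolling out an exploration policy.
    rng = np.random.default_rng(0)
    states = rng.normal(size=(5000, 3))
    print(knn_entropy_estimate(states, k=4))
```

In a sketch like this, a higher estimate indicates that the sampled states cover the state space more uniformly, which is the quantity a maximum-entropy exploration policy would be driven to increase.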
Community Implementations: [6 code implementations](https://www.catalyzex.com/paper/a-policy-gradient-method-for-task-agnostic/code)