TL;DR: State and action embeddings that incorporate the environment's dynamics improve exploration and RL from pixels.
Abstract: In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.
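To make the forward-prediction objective concrete, below is a minimal sketch (not the authors' implementation) of one way to jointly train a state encoder, an action-sequence encoder, and a predictor of the state reached after executing that sequence. All module names, architectures, and hyperparameters are illustrative assumptions; see the linked repository for the actual method.

```python
# Hypothetical sketch of a forward-prediction objective for jointly learning
# state and action embeddings. Dimensions and networks are placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, K = 17, 6, 4   # K = length of the action sequence (assumed)
Z_STATE, Z_ACTION = 32, 8             # embedding sizes (assumed)

state_enc = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(),
                          nn.Linear(256, Z_STATE))
action_enc = nn.Sequential(nn.Linear(K * ACTION_DIM, 256), nn.ReLU(),
                           nn.Linear(256, Z_ACTION))
# Predicts the raw state K steps ahead from the two embeddings.
predictor = nn.Sequential(nn.Linear(Z_STATE + Z_ACTION, 256), nn.ReLU(),
                          nn.Linear(256, STATE_DIM))

params = (list(state_enc.parameters()) + list(action_enc.parameters())
          + list(predictor.parameters()))
opt = torch.optim.Adam(params, lr=3e-4)

def forward_prediction_loss(s_t, actions, s_tk):
    """Predict s_{t+K} from the embeddings of s_t and the K-step action sequence."""
    z_s = state_enc(s_t)                  # state embedding
    z_a = action_enc(actions.flatten(1))  # action-sequence embedding
    s_tk_pred = predictor(torch.cat([z_s, z_a], dim=-1))
    return ((s_tk_pred - s_tk) ** 2).mean()

# One gradient step on a random batch (placeholder data; a real setup would
# sample (s_t, a_{t:t+K}, s_{t+K}) tuples from environment rollouts).
s_t = torch.randn(64, STATE_DIM)
acts = torch.randn(64, K, ACTION_DIM)
s_tk = torch.randn(64, STATE_DIM)
loss = forward_prediction_loss(s_t, acts, s_tk)
opt.zero_grad(); loss.backward(); opt.step()
```

One design note on this sketch: predicting the raw future state (rather than its learned embedding) keeps the objective grounded and avoids the trivial collapse that a purely latent-space prediction loss can suffer; the paper's actual objective and regularization may differ.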
Code: https://github.com/dyne-submission/dynamics-aware-embeddings
Keywords: representation learning, reinforcement learning
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:1908.09357/code)