Discrete Compositional Representations as an Abstraction for Goal Conditioned Reinforcement Learning

Published: 31 Oct 2022, Last Modified: 28 Jan 2023 (NeurIPS 2022 Accept)
Keywords: goal conditioned RL, discrete bottleneck, vq-vae, self-supervised representations, hierarchical RL
TL;DR: A discrete bottleneck on self-supervised representations for learning abstractions of goal observations in goal-conditioned hierarchical reinforcement learning.
Abstract: Goal-conditioned reinforcement learning (RL) is a promising direction for training agents that are capable of solving multiple tasks and reaching a diverse set of objectives. How to specify and ground these goals in such a way that we can both reliably reach goals during training and generalize to new goals during evaluation remains an open area of research. Defining goals in the space of noisy, high-dimensional sensory inputs is one possibility, yet this poses a challenge both for training goal-conditioned agents and for generalizing to novel goals. We propose to address this by learning compositional representations of goals and processing the resulting representations through a discretization bottleneck, for coarser goal specification, in an approach we call DGRL. We show that discretizing outputs from goal encoders through a bottleneck can work well in goal-conditioned RL setups, by experimentally evaluating this method on tasks ranging from maze environments to complex robotic navigation and manipulation tasks. Additionally, we show a theoretical result that bounds the expected return for goals not observed during training, while still allowing goals to be specified with expressive combinatorial structure.