Learning Discrete State Abstractions With Deep Variational Inference

Published: 21 Dec 2020, Last Modified: 12 Mar 2024, AABI 2020
Keywords: reinforcement learning, deep learning, abstraction, bisimulation, variational inference, information bottleneck
TL;DR: Our method learns to represent image states of an environment as discrete symbols; we can then find optimal policies using efficient tabular approaches.
Abstract: Abstraction is crucial for effective sequential decision making in domains with large state spaces. In this work, we propose an information bottleneck method for learning approximate bisimulations, a type of state abstraction. We use a deep neural encoder to map states onto continuous embeddings. We map these embeddings onto a discrete representation using an action-conditioned hidden Markov model, which is trained end-to-end with the neural network. Our method is suited for environments with high-dimensional states and learns from a stream of experience collected by an agent acting in a Markov decision process. Using this learned discrete abstract model, we can efficiently plan for unseen goals in a multi-goal reinforcement learning setting. We test our method in simplified robotic manipulation domains with image states. We also compare it against previous model-based approaches to finding bisimulations in discrete grid-world-like environments. Source code is available at \url{github.com} and will be linked after the review period.
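For intuition, below is a minimal, self-contained sketch (not the authors' released code) of the two components the abstract describes, written in PyTorch: a convolutional encoder that maps image states to continuous embeddings, a categorical head that softly assigns each embedding to one of K discrete abstract states, and an action-conditioned transition table over those abstract states trained end-to-end by predicting the next observation's abstract state. The class names, network shapes, and the simplified surrogate loss (a soft cross-entropy in place of the full variational HMM objective) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch only: a simplified stand-in for the encoder + action-conditioned
# discrete abstraction described in the abstract. Names and shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, N_ACTIONS, EMB_DIM = 16, 4, 64  # abstract states, actions, embedding size (assumed)

class Encoder(nn.Module):
    """Maps an image state (3 x 64 x 64) to a continuous embedding."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(EMB_DIM)

    def forward(self, x):
        return self.fc(self.conv(x))

class DiscreteAbstraction(nn.Module):
    """Soft assignment of embeddings to K abstract states plus an
    action-conditioned transition table between abstract states."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.cluster_head = nn.Linear(EMB_DIM, K)                        # q(z | s)
        self.trans_logits = nn.Parameter(torch.zeros(N_ACTIONS, K, K))   # T(z' | z, a)

    def posterior(self, obs):
        # Distribution over abstract states for a batch of image observations.
        return F.softmax(self.cluster_head(self.encoder(obs)), dim=-1)

    def loss(self, obs, action, next_obs):
        q_z = self.posterior(obs)                               # (B, K)
        q_z_next = self.posterior(next_obs)                     # (B, K)
        log_T = F.log_softmax(self.trans_logits[action], dim=-1)  # (B, K, K)
        # Expected log-likelihood of the next abstract state under the
        # action-conditioned transition model: a simplified surrogate for
        # the end-to-end HMM / variational objective in the abstract.
        pred = torch.einsum('bk,bkj->bj', q_z, log_T)
        return -(q_z_next * pred).sum(-1).mean()
```

Once the abstract transition probabilities are learned, planning for an unseen goal can reduce to tabular value iteration over the K abstract states, in the spirit of the "efficient tabular approaches" mentioned in the TL;DR. The sketch below assumes a reward of 1 for occupying the goal's abstract state; the goal index and discount are illustrative.

```python
def value_iteration(T, goal, gamma=0.95, iters=100):
    """Tabular value iteration over the learned abstract MDP.
    T: (N_ACTIONS, K, K) transition probabilities, e.g.
       T = F.softmax(model.trans_logits, dim=-1).detach()
    goal: index of the goal abstract state (assumed reward of 1 when occupied)."""
    n_states = T.shape[-1]
    reward = torch.zeros(n_states)
    reward[goal] = 1.0
    V = torch.zeros(n_states)
    for _ in range(iters):
        Q = reward + gamma * torch.einsum('akj,j->ak', T, V)  # (N_ACTIONS, K)
        V = Q.max(0).values
    return Q.argmax(0)  # greedy action for each abstract state
```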
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2003.04300/code)
