Benchmarking Unsupervised Representation Learning for Continuous Control

Published: 25 Jun 2020 (last modified: 05 May 2023), RobRetro 2020
Abstract: We address the problem of learning reusable state representations from a non-stationary stream of high-dimensional observations. This matters for Reinforcement Learning (RL), where training itself induces non-stationary data distributions. Unsupervised approaches can be trained on such data streams to produce low-dimensional latent embeddings, which could then be reused in domains with different dynamics and rewards. However, the quality of the resulting representations needs to be evaluated adequately. We propose an evaluation suite that measures alignment between the learned latent states and the true low-dimensional states. Using this suite, we benchmark several widely used unsupervised learning approaches, uncovering the strengths and limitations of methods that impose additional constraints or assumptions on the latent space.
Keywords: Unsupervised Representation Learning, Robotics Simulation, Benchmarking
TL;DR: We propose an evaluation suite that measures alignment between learned latent states and true low-dimensional states; we benchmark several widely used unsupervised learning approaches.
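The page does not spell out which alignment metrics the suite uses, but a common way to measure how well learned latents recover the true low-dimensional state is a linear probe: regress the ground-truth simulator state on the latent embedding and report held-out R^2. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' code); `latents` and `true_states` are assumed arrays of encoder outputs and ground-truth states.

```python
# Minimal sketch (assumed, not the paper's implementation): quantify alignment
# between learned latents and true low-dimensional states with a linear probe.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score


def linear_alignment_score(latents: np.ndarray, true_states: np.ndarray, seed: int = 0):
    """Fit a ridge probe from latents to true states; return per-dimension and mean held-out R^2."""
    z_tr, z_te, s_tr, s_te = train_test_split(
        latents, true_states, test_size=0.2, random_state=seed
    )
    probe = Ridge(alpha=1.0).fit(z_tr, s_tr)
    preds = probe.predict(z_te)
    per_dim = r2_score(s_te, preds, multioutput="raw_values")
    return per_dim, per_dim.mean()


if __name__ == "__main__":
    # Synthetic stand-in data: latents are a noisy linear mixture of a 4-D true state.
    rng = np.random.default_rng(0)
    true_states = rng.normal(size=(5000, 4))
    mixing = rng.normal(size=(4, 16))
    latents = true_states @ mixing + 0.1 * rng.normal(size=(5000, 16))
    per_dim, mean_r2 = linear_alignment_score(latents, true_states)
    print("per-dimension R^2:", np.round(per_dim, 3), "mean:", round(mean_r2, 3))
```

A linear probe only captures linearly decodable structure; nonlinear probes or representation-similarity measures could be substituted in the same harness if the benchmark calls for them.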