Unsupervised Learning of State Representations for Multiple Tasks

Antonin Raffin, Sebastian Höfer, Rico Jonschkowski, Oliver Brock, Freek Stulp

Nov 04, 2016 (modified: Jan 13, 2017) · ICLR 2017 conference submission
  • Abstract: We present an approach for learning state representations in multi-task reinforcement learning. Our method learns multiple low-dimensional state representations from raw observations in an unsupervised fashion, without knowledge of which task is being executed or of how many tasks are involved. The method is based on a gated neural network architecture, trained with an extension of the learning with robotic priors objective. In simulated experiments, we show that our method learns better state representations for reinforcement learning, and we analyze why and when it manages to do so.
  • TL;DR: A learning method that automatically detects multiple reinforcement learning tasks and extracts state representations from raw observations
  • Conflicts: tu-berlin.de, dlr.de, ensta-paristech.fr
  • Keywords: Reinforcement Learning, Unsupervised Learning
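The "learning with robotic priors" objective mentioned in the abstract (introduced by Jonschkowski and Brock) combines simple, physics-motivated loss terms over the learned low-dimensional states. As a hedged illustration only — the function names, shapes, and exact formulations below are assumptions based on that prior work, not this submission's extended objective — here is a minimal NumPy sketch of two such terms: temporal coherence (states should change slowly over time) and proportionality (the same action should cause state changes of similar magnitude):

```python
import numpy as np

def temporal_coherence_loss(states):
    """Prior: states change gradually over time (slowness).

    states: array of shape (T, d), one learned state per time step.
    Penalizes the mean squared magnitude of consecutive state changes.
    (Illustrative formulation, not the paper's exact objective.)
    """
    deltas = states[1:] - states[:-1]
    return float(np.mean(np.sum(deltas ** 2, axis=1)))

def proportionality_loss(states, actions):
    """Prior: the same action causes state changes of similar magnitude.

    actions: length T-1 sequence, actions[t] taken between states t and t+1.
    Compares |delta s| across every pair of steps that used the same action.
    (Illustrative formulation, not the paper's exact objective.)
    """
    deltas = np.linalg.norm(states[1:] - states[:-1], axis=1)
    total, pairs = 0.0, 0
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            if np.array_equal(actions[i], actions[j]):
                total += (deltas[i] - deltas[j]) ** 2
                pairs += 1
    return total / pairs if pairs else 0.0

# Toy check: a constant trajectory incurs no penalty from either term,
# while a randomly drifting one is penalized by temporal coherence.
T, d = 10, 2
constant = np.ones((T, d))
moving = np.cumsum(np.random.default_rng(0).normal(size=(T, d)), axis=0)
actions = np.zeros(T - 1, dtype=int)  # a single action, repeated

print(temporal_coherence_loss(constant))       # 0.0
print(temporal_coherence_loss(moving) > 0.0)   # True
print(proportionality_loss(constant, actions)) # 0.0
```

In the full method, losses of this kind are summed and minimized over the parameters of the network mapping raw observations to states; the paper's contribution is extending such an objective to multiple tasks via a gated architecture, which this sketch does not attempt to reproduce.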