Unsupervised Deep Learning of State Representation Using Robotic Priors

Timothee LESORT, David FILLIAT

Nov 04, 2016 (modified: Jan 19, 2017) ICLR 2017 conference submission
  • Abstract: Our understanding of the world depends heavily on how we represent it. Using background knowledge about its complex underlying physical rules, our brain can produce intuitive, simplified representations that it can easily use to solve problems. This paper aims to reproduce this simplification process, using a neural network to produce a simple, low-dimensional state representation of the world from images acquired by a robot. As proposed in Jonschkowski & Brock (2015), we train the neural network in an unsupervised way, using the "a priori" knowledge we have about the world as loss functions called "robotic priors", which we implement through a siamese network. This approach is used to learn a one-dimensional representation of the position of a Baxter robot's head from raw images. The experiment resulted in a 97.7% correlation between the learned representation and the ground truth, and shows that relevant visual features of the environment are learned.
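To make the approach concrete, here is a minimal, hypothetical sketch of the idea described above: a shared-weight ("siamese") encoder maps observations to a low-dimensional state, and "robotic priors" in the style of Jonschkowski & Brock (2015) are expressed as loss terms on pairs of encoded states. The encoder here is a toy linear map on random vectors, and only two of the priors (temporal coherence and proportionality) are sketched; names, dimensions, and data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights of the siamese encoder: a linear map from a
# 16-D "image" vector to a 1-D state (the paper uses a deep network on images).
W = rng.normal(scale=0.1, size=(1, 16))

def encode(x):
    """One branch of the siamese network; both branches share W."""
    return W @ x

# Robotic priors sketched as loss terms on encoded states.

def temporal_coherence(s_t, s_tp1):
    # Prior: world states change slowly, so consecutive encoded
    # states should be close.
    return float(np.sum((s_tp1 - s_t) ** 2))

def proportionality(ds1, ds2):
    # Prior: the same action should cause state changes of similar
    # magnitude, regardless of where it is applied.
    return float((np.linalg.norm(ds1) - np.linalg.norm(ds2)) ** 2)

# Two toy transitions assumed to share the same action.
x1, x1_next = rng.normal(size=16), rng.normal(size=16)
x2, x2_next = rng.normal(size=16), rng.normal(size=16)

s1, s1_next = encode(x1), encode(x1_next)
s2, s2_next = encode(x2), encode(x2_next)

# Total prior loss: both terms are squared penalties, hence non-negative.
loss = temporal_coherence(s1, s1_next) + proportionality(s1_next - s1,
                                                         s2_next - s2)
print(loss)
```

In training, such a loss would be minimized by gradient descent over the encoder weights, shaping the representation without any state labels — which is what makes the method unsupervised.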
  • TL;DR: This paper introduces a method for training a deep neural network to learn a representation of a robot's environment state using a priori knowledge.
  • Conflicts: ensta-paristech.fr, cpe.fr, umontreal.ca
  • Keywords: Deep learning, Computer vision, Unsupervised Learning