Learning Awareness Models

Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Alistair Muldal, Tom Erez, Yuval Tassa, Nando de Freitas, Misha Denil

Feb 15, 2018 · ICLR 2018 Conference Blind Submission
  • Abstract: We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that by maximizing the entropy of predictions about the body (touch sensors, proprioception, and vestibular information) we are able to learn dynamic models of the body that can be used for control. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body. Our dynamics model successfully predicts distributions over 132 sensor readings up to 100 steps into the future. We demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. That is, the model learns holistic, persistent representations of objects in the world, even though the only training signals are body signals. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world.
  • TL;DR: We train predictive models on proprioceptive information and show they represent properties of external objects.
  • Keywords: Awareness, Prediction, Seq2seq, Robots
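
To make the setup described in the abstract concrete, here is a minimal sketch of the kind of model it implies: a seq2seq network that encodes a history of body-sensor readings and rolls a decoder forward to predict per-step distributions over future readings, with a predictive-entropy term available as an exploration signal. Only the 132-sensor count and the 100-step horizon come from the abstract; the GRU architecture, Gaussian output heads, and hidden size are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

NUM_SENSORS = 132  # touch + proprioceptive + vestibular channels (from the abstract)


class SensorSeq2Seq(nn.Module):
    """Encode a history of body-sensor readings, then roll a decoder
    forward to predict a Gaussian distribution over each future step."""

    def __init__(self, num_sensors: int = NUM_SENSORS, hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(num_sensors, hidden, batch_first=True)
        self.decoder = nn.GRUCell(num_sensors, hidden)
        self.mean_head = nn.Linear(hidden, num_sensors)
        self.logstd_head = nn.Linear(hidden, num_sensors)

    def forward(self, history: torch.Tensor, horizon: int = 100):
        # history: (batch, T, num_sensors); returns one Normal per future step.
        _, h = self.encoder(history)
        h = h.squeeze(0)
        x = history[:, -1]  # seed the decoder with the last observed reading
        dists = []
        for _ in range(horizon):
            h = self.decoder(x, h)
            dist = Normal(self.mean_head(h), self.logstd_head(h).exp())
            dists.append(dist)
            x = dist.mean  # feed the predicted mean back in (open-loop rollout)
        return dists


def nll_loss(dists, future):
    # future: (batch, horizon, num_sensors); standard maximum-likelihood training.
    return -sum(d.log_prob(future[:, t]).sum(-1) for t, d in enumerate(dists)).mean()


def entropy_bonus(dists):
    # Total predictive entropy over the rollout: one plausible reading of
    # "maximizing the entropy of predictions about the body" as an
    # exploration reward that draws the agent toward uncertain interactions.
    return sum(d.entropy().sum(-1) for d in dists).mean()
```

In this sketch the same model serves both roles the abstract mentions: `nll_loss` fits the dynamics model to recorded sensor trajectories, while `entropy_bonus` scores rollouts for data collection, so the agent is rewarded for touching and manipulating objects whose effects on its body it cannot yet predict.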
