Learning Semantically Meaningful Representations Through Embodiment

25 Sep 2019 (modified: 24 Dec 2019) · ICLR 2020 Conference Blind Submission · Readers: Everyone
  • Keywords: reinforcement learning, deep learning, embodied, embodiment, embodied cognition, representation learning, representations, sparse coding
  • TL;DR: We show how a deep neural network can learn meaningful and robust representations of visual input when trained in an embodied framework.
  • Abstract: How do humans acquire a meaningful understanding of the world with little to no supervision or semantic labels provided by the environment? Here we investigate embodiment, and a closed loop between action and perception, as one key component in this process. We take a close look at the representations learned by a deep reinforcement learning agent trained with visual and vector observations collected in a 3D environment with sparse rewards. We show that this agent learns semantically meaningful and stable representations of its environment without receiving any semantic labels. Our results show that the agent learns to represent the action-relevant information extracted from pixel input in a wide variety of sparse activation patterns. The quality of the learned representations demonstrates the strength of embodied learning and its advantages over fully supervised approaches with regard to robustness and generalizability.
  • Code: https://github.com/EmbodiedLearning/ICLR-Submission-2020
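The abstract's claim about sparse activation patterns can be made concrete with a simple sparsity measure over a representation layer. The sketch below is illustrative only: the threshold, the ReLU-like synthetic activations, and the `activation_sparsity` helper are assumptions for demonstration, not the authors' analysis protocol.

```python
import numpy as np

def activation_sparsity(activations, threshold=0.01):
    """Fraction of units whose absolute activation falls below `threshold`.

    A high value indicates a sparse code: most units are (near-)silent
    for a given batch of observations.
    """
    return float(np.mean(np.abs(activations) < threshold))

# Illustrative stand-in for a layer's activations over a batch of
# 32 observations with 256 hidden units: ReLU of a negatively
# shifted Gaussian, so most units are silent (a sparse code).
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(-1.0, 1.0, size=(32, 256)), 0.0)

sparsity = activation_sparsity(acts)
```

Here `sparsity` is well above 0.5, reflecting that most units are inactive; running the same measure on activations from a densely coding layer would yield a value near zero.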
11 Replies