Unsupervised Learning of Goal Spaces for Intrinsically Motivated Exploration
Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Intrinsically motivated goal exploration algorithms enable machines to explore and discover a diversity of policies in large and complex environments. These exploration algorithms have been shown to allow real-world robots to acquire skills such as tool use in high-dimensional continuous action and state spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered space. In this work, we propose to use deep representation learning algorithms to learn a goal space, leveraging observations of world changes produced by another agent. We present experiments with a simulated robot arm interacting with an object, and we study how the performance of exploration algorithms on such learned representations relates to their performance on engineered representations. We also uncover a link between exploration performance and the quality of the learned representation with respect to the underlying state space.
TL;DR: We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of a space where goals can be sampled, and we systematically compare various representation learning algorithms in this context.
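The architecture described above has two stages: first learn a low-dimensional goal space from passive observations of world changes, then run an intrinsically motivated goal exploration loop that samples goals in that learned space. The following is a minimal illustrative sketch, not the paper's implementation: PCA stands in for the deep representation learners the paper compares, the environment is a hypothetical linear toy, and all names (`execute`, `encode`, `W_env`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment: a policy is a 2-D parameter vector and
# executing it yields a 10-D observation of the resulting world change.
W_env = rng.normal(size=(10, 2))

def execute(policy):
    return W_env @ policy  # observed outcome of running the policy

# --- Stage 1: unsupervised goal-space learning from observations of
# world changes (PCA here; the paper studies deep representation learners).
demos = np.array([execute(rng.normal(size=2)) for _ in range(200)])
mean = demos.mean(axis=0)
_, _, Vt = np.linalg.svd(demos - mean, full_matrices=False)

def encode(obs):
    return (obs - mean) @ Vt[:2].T  # project into the 2-D learned goal space

# --- Stage 2: intrinsically motivated goal exploration in that space.
policies, outcomes = [], []
for _ in range(5):  # bootstrap with a few random policies
    p = rng.normal(size=2)
    policies.append(p)
    outcomes.append(encode(execute(p)))

for _ in range(100):
    goal = rng.uniform(-3, 3, size=2)  # self-generated goal in latent space
    # Reuse the policy whose past outcome is closest to the goal, perturbed.
    dists = [np.linalg.norm(o - goal) for o in outcomes]
    p = policies[int(np.argmin(dists))] + 0.1 * rng.normal(size=2)
    policies.append(p)
    outcomes.append(encode(execute(p)))

# Spread of reached outcomes: a crude proxy for exploration diversity.
coverage = np.ptp(np.array(outcomes), axis=0)
```

The key design point this sketch mirrors is that the exploration loop never touches an engineered outcome space: goals are sampled, and outcomes compared, only in the learned latent representation.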