Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics

27 Sept 2018 (modified: 29 Sept 2024), ICLR 2019 Conference Blind Submission, Readers: Everyone
Abstract: Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency. In contrast to end-to-end learning, state representation learning can provide a compact, efficient and relevant representation of states that speeds up policy learning, reduces the number of samples needed, and is easier to interpret. We evaluate several state representation learning methods on goal-based robotics tasks and propose a new unsupervised model that stacks representations and combines the strengths of several of these approaches. This method encodes all the relevant features, performs on par with or better than end-to-end learning, and is robust to changes in hyperparameters.
Keywords: reinforcement learning, state representation learning, feature extraction, robotics, deep learning
TL;DR: We evaluate the benefits of decoupling feature extraction from policy learning in robotics and propose a new way of combining state representation learning methods.
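The TL;DR above describes the core idea without code; the following is a minimal sketch of that decoupling, assuming a plain autoencoder as the unsupervised representation objective and illustrative network sizes (the paper combines and stacks several SRL objectives). All names and dimensions here are hypothetical, not the authors' implementation.

```python
# Sketch: learn a state representation first, then freeze it and hand the
# low-dimensional features to a standard RL algorithm instead of raw pixels.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, state_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),   # 64x64 -> 31x31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),  # 31x31 -> 14x14
            nn.Flatten(),
        )
        self.fc = nn.Linear(64 * 14 * 14, state_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

class Decoder(nn.Module):
    def __init__(self, state_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

def train_srl(encoder, decoder, observations, epochs=10):
    """Unsupervised representation learning on previously collected observations."""
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
    )
    for _ in range(epochs):
        recon = decoder(encoder(observations))
        loss = nn.functional.mse_loss(recon, observations)
        opt.zero_grad()
        loss.backward()
        opt.step()

encoder, decoder = Encoder(), Decoder()
obs = torch.rand(16, 3, 64, 64)          # stand-in batch of robot camera frames
train_srl(encoder, decoder, obs)
for p in encoder.parameters():           # decouple: features are fixed during RL
    p.requires_grad_(False)
features = encoder(obs)                  # input to any standard policy learner
```

In this setup the policy only ever sees the frozen low-dimensional features, which is what makes the representation reusable across tasks and the policy cheaper to train than an end-to-end pixel-based agent.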
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/decoupling-feature-extraction-from-policy/code)