RRL: Resnet as representation for Reinforcement Learning

Published: 14 May 2022, Last Modified: 22 Oct 2023. ICRA 2022 Workshop: RL for Manipulation (Oral).
Keywords: Representation learning, Reinforcement learning, Manipulation, Robotics
Abstract: Generalist robots capable of performing dexterous, contact-rich manipulation tasks will enhance productivity and provide care in un-instrumented settings like homes. Such tasks require operating in the real world using only the robot's onboard sensors, such as cameras and joint encoders, which is challenging for policy learning owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning -- a straightforward yet effective approach that learns complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. On a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs, with performance and sample efficiency matching learning directly from the state, even in complex high-dimensional domains, is far from obvious.
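
A minimal sketch of the idea described in the abstract, assuming a PyTorch/torchvision setup: a frozen, ImageNet-pretrained ResNet encodes camera frames into a compact feature vector, which is concatenated with proprioceptive readings and handed to a standard RL agent in place of raw pixels. The class name, the choice of ResNet-34, and the dummy input dimensions are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the core RRL idea: a frozen pre-trained ResNet as the visual
# representation inside an otherwise standard RL observation pipeline.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

class ResnetFeatureEncoder(nn.Module):
    """Frozen ResNet feature extractor (hypothetical helper, not the paper's code)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet34(pretrained=True)      # ImageNet-pretrained weights
        # Drop the classification head; keep everything up to global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():             # features stay frozen during RL
            p.requires_grad = False
        self.preprocess = T.Compose([
            T.Resize(256), T.CenterCrop(224),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

    @torch.no_grad()
    def forward(self, image):                            # image: (B, 3, H, W) in [0, 1]
        x = self.preprocess(image)
        return self.features(x).flatten(1)               # (B, 512) feature vector

# The RL agent treats [resnet_features, proprioception] as its observation and can
# be trained with any standard algorithm, optionally seeded with demonstrations.
encoder = ResnetFeatureEncoder()
image = torch.rand(1, 3, 240, 320)                       # dummy camera frame
joint_state = torch.rand(1, 24)                          # dummy joint encoder readings
observation = torch.cat([encoder(image), joint_state], dim=1)
```

Because the encoder is never updated, the RL loop sees a fixed, low-dimensional observation space, which is what makes the approach a near drop-in replacement for learning directly from the state.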
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2107.03380/code)