Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning

12 Oct 2021 (modified: 08 Sept 2024) · Deep RL Workshop NeurIPS 2021
Keywords: deep rl, off-policy rl, image-based rl
TL;DR: A sample- and compute-efficient model-free RL algorithm that solves humanoid directly from pixels
Abstract: We present DrQ-v2, a model-free reinforcement learning (RL) algorithm for visual continuous control. DrQ-v2 builds on DrQ, an off-policy actor-critic approach that uses data augmentation to learn directly from pixels. We introduce several improvements that yield state-of-the-art results on the DeepMind Control Suite. Notably, DrQ-v2 is able to solve complex humanoid locomotion tasks directly from pixel observations, a result previously unattained by model-free RL. DrQ-v2 is conceptually simple, easy to implement, and has a significantly smaller computational footprint than prior work, with the majority of tasks taking just 8 hours to train on a single GPU. Finally, we publicly release DrQ-v2's implementation to provide RL practitioners with a strong and computationally efficient baseline.
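As context for the data-augmentation component: DrQ and DrQ-v2 regularize the actor and critic by applying small random shifts to the pixel observations before encoding them. Below is a minimal PyTorch sketch of such a random-shift augmentation. The function name `random_shift` and the per-image crop loop are illustrative assumptions, not the authors' code (the released implementation performs the equivalent crop in a vectorized way), and `pad=4` reflects the ±4-pixel shifts typically applied to 84×84 DeepMind Control frames.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Randomly shift a batch of images by up to `pad` pixels.

    Pads each image's borders by replicating edge pixels, then takes a
    random crop at the original size, giving each image an independent
    (dx, dy) shift in [-pad, pad]. Expects a float tensor of shape
    (N, C, H, W) and returns a tensor of the same shape.
    """
    n, _, h, w = imgs.shape
    # Replicate-pad the borders so shifted crops stay in-distribution.
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    # Sample an independent crop offset for every image in the batch.
    xs = torch.randint(0, 2 * pad + 1, (n,))
    ys = torch.randint(0, 2 * pad + 1, (n,))
    out = torch.empty_like(imgs)
    for i in range(n):
        out[i] = padded[i, :, ys[i] : ys[i] + h, xs[i] : xs[i] + w]
    return out

# Usage: augment a batch of stacked-frame observations, e.g. three
# 84x84 RGB frames stacked channel-wise (the shapes are illustrative).
obs = torch.rand(32, 9, 84, 84)
aug_obs = random_shift(obs)
```

The per-image loop keeps the sketch readable; in practice the same effect is obtained in one batched operation, which matters when the augmentation runs inside every training step.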
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/mastering-visual-continuous-control-improved/code)