Keywords: Reinforcement Learning, Sample-Efficient, Action Discretization
TL;DR: We present a sample-efficient RL algorithm that can be deployed in real-robot experiments. We train RL agents to zoom into a continuous action space in a coarse-to-fine manner.
Abstract: Despite recent advances in improving the sample efficiency of reinforcement learning (RL) algorithms, designing an RL algorithm that can be practically deployed in real-world environments remains a challenge. In this paper, we present Coarse-to-fine Reinforcement Learning (CRL), a framework that trains RL agents to zoom into a continuous action space in a coarse-to-fine manner, enabling the use of stable, sample-efficient value-based RL algorithms for fine-grained continuous control tasks. Our key idea is to train agents that output actions by iterating the procedure of (i) discretizing the continuous action space into multiple intervals and (ii) selecting the interval with the highest Q-value to further discretize at the next level. We then introduce a concrete, value-based algorithm within the CRL framework called Coarse-to-fine Q-Network (CQN). Our experiments demonstrate that CQN significantly outperforms RL and behavior cloning baselines on 20 sparsely rewarded RLBench manipulation tasks with a modest number of environment interactions and expert demonstrations. We also show that CQN robustly learns to solve real-world manipulation tasks within a few minutes of online training.
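To make the coarse-to-fine action selection described above concrete, here is a minimal single-dimension sketch of the zoom-in loop: at each level the current interval is split into bins, the bin with the highest Q-value is selected, and the search recurses into that bin. The function name coarse_to_fine_action, the q_fn interface, and the num_levels/num_bins defaults are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def coarse_to_fine_action(q_fn, low, high, num_levels=3, num_bins=5):
    """Pick a continuous action by iteratively zooming into the
    highest-Q interval (illustrative sketch of the CRL idea).

    q_fn(level, centers) -> Q-value estimate for each candidate bin
    center at the given level (stand-in for a learned critic; this
    signature is an assumption for the sketch).
    """
    lo, hi = float(low), float(high)
    for level in range(num_levels):
        # (i) discretize the current interval into `num_bins` bins
        edges = np.linspace(lo, hi, num_bins + 1)
        centers = (edges[:-1] + edges[1:]) / 2.0
        # (ii) select the bin with the highest Q-value and zoom into it
        best = int(np.argmax(q_fn(level, centers)))
        lo, hi = edges[best], edges[best + 1]
    # the final action is the center of the last selected interval
    return (lo + hi) / 2.0

# toy usage: a dummy critic that prefers actions near 0.3
dummy_q = lambda level, centers: -np.abs(centers - 0.3)
print(coarse_to_fine_action(dummy_q, low=-1.0, high=1.0))
```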
Supplementary Material: zip
Spotlight Video: mp4
Website: https://younggyo.me/cqn/
Code: https://github.com/younggyoseo/CQN
Publication Agreement: pdf
Student Paper: no
Submission Number: 133