Offline-to-online Reinforcement Learning for Image-based Grasping with Scarce Demonstrations

Published: 29 Oct 2024, Last Modified: 03 Nov 2024, CoRL 2024 Workshop MRM-D Poster, CC BY 4.0
Keywords: Image-based grasping, Reinforcement learning, Demonstrations
TL;DR: We propose a novel offline-to-online RL algorithm that enables real-life image-based grasping in under two hours of interaction time
Abstract: Offline-to-online reinforcement learning (O2O RL) aims to obtain a policy that improves continually as it interacts with the environment, while ensuring that the initial policy's behaviour is satisficing. This satisficing behaviour is necessary in robotic manipulation, where random exploration is costly both in interaction time and in the catastrophic failures it can cause. O2O RL is especially compelling when only a scarce number of (potentially suboptimal) demonstrations is available, a scenario where behavioural cloning is known to suffer from distribution shift. In this work, we propose a novel O2O RL algorithm that can learn a real-life image-based robotic vacuum grasping task from a small number of demonstrations. The proposed algorithm replaces the target network in off-policy actor-critic algorithms with a regularization technique inspired by the neural tangent kernel. We demonstrate empirically that the proposed algorithm exhibits satisficing behaviour after the offline phase and can further reach a success rate above 90% in under two hours of interaction time, with only 50 human demonstrations.
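A minimal sketch of the core idea follows, shown for a discrete-action Q-network for brevity (the paper applies the idea inside an off-policy actor-critic). The function name, the reg_coef parameter, and the exact penalty form are illustrative assumptions rather than the paper's formulation: under a linearised (NTK) view, a gradient step at the update states shifts the bootstrap values in proportion to the kernel between their gradients, so penalising that alignment plays the stabilising role a target network normally would.

    import torch
    import torch.nn.functional as F

    def td_loss_ntk_regularized(q_net, batch, gamma=0.99, reg_coef=1.0):
        # Sketch only: the exact regulariser in the paper may differ.
        obs, act, rew, next_obs, done = batch
        params = [p for p in q_net.parameters() if p.requires_grad]

        # Bootstrap with the online network; no target network copy is kept.
        q_next = q_net(next_obs).max(dim=-1).values
        target = rew + gamma * (1.0 - done) * q_next.detach()

        q_pred = q_net(obs).gather(-1, act.unsqueeze(-1)).squeeze(-1)
        td_loss = F.mse_loss(q_pred, target)

        # NTK-inspired regulariser (assumed form): the inner product of the
        # gradients at the update states and at the bootstrap states is the
        # empirical NTK term governing how much one gradient step perturbs
        # the bootstrap values; penalising it keeps the targets stable.
        g_pred = torch.autograd.grad(q_pred.sum(), params, create_graph=True)
        g_next = torch.autograd.grad(q_next.sum(), params, create_graph=True)
        ntk = sum((a * b).sum() for a, b in zip(g_pred, g_next))
        return td_loss + reg_coef * ntk.pow(2)

Dropping the target network removes the value lag that its slow updates introduce, which is plausibly what allows the policy to keep improving within a two-hour interaction budget.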
Submission Number: 17