Keywords: robotic manipulation, representation learning, keypoints, policy learning
TL;DR: We learn occlusion- and scale-invariant multi-object keypoints as a representation for robotic manipulation in a self-supervised manner.
Abstract: In recent years, policy learning methods using either reinforcement learning or imitation learning have made significant progress.
However, both techniques still suffer from high computational cost and the need for large amounts of training data. This problem is especially prevalent in real-world robotic manipulation tasks, where ground truth scene features are not available and policies must instead be learned from raw camera observations.
In this paper, we demonstrate the efficacy of learning image keypoints via the Dense Correspondence pretext task for downstream policy learning.
Extending prior work to challenging multi-object scenes, we show that our model can be trained to address two key challenges in representation learning: scale invariance and robustness to occlusion.
We evaluate our approach on diverse robot manipulation tasks, compare it to other visual representation learning approaches, and demonstrate its flexibility and effectiveness for sample-efficient policy learning.
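For readers unfamiliar with the Dense Correspondence pretext task, below is a minimal sketch of a pixelwise contrastive objective in the style of Dense Object Nets: matching pixels across two views of a scene are pulled together in descriptor space, while non-matching pixels are pushed apart up to a margin. The function name, tensor layout, and margin value are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b,
                               matches_a, matches_b,
                               non_matches_a, non_matches_b,
                               margin=0.5):
    """Sketch of a dense-correspondence contrastive loss.

    desc_a, desc_b: (D, H, W) descriptor maps for two views of a scene.
    matches_*: (N,) flat pixel indices of corresponding points.
    non_matches_*: (M,) flat pixel indices of non-corresponding points.
    """
    D = desc_a.shape[0]
    # Flatten spatial dimensions: (D, H, W) -> (H*W, D)
    flat_a = desc_a.reshape(D, -1).t()
    flat_b = desc_b.reshape(D, -1).t()

    # Pull matching descriptor pairs together (squared L2 distance).
    match_dist = F.pairwise_distance(flat_a[matches_a], flat_b[matches_b])
    match_loss = match_dist.pow(2).mean()

    # Push non-matching pairs apart, penalizing only those
    # closer than the margin (hinge loss).
    non_match_dist = F.pairwise_distance(flat_a[non_matches_a],
                                         flat_b[non_matches_b])
    non_match_loss = torch.clamp(margin - non_match_dist, min=0).pow(2).mean()

    return match_loss + non_match_loss
```

In practice, correspondences for the pretext task are typically generated without labels, e.g. from known camera geometry and depth, which is what makes the representation self-supervised.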