Keywords: self-supervised learning, computer vision, representation learning, bin-picking
TL;DR: We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations.
Abstract: We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets such as registered RGB-D sequences, we train on an unordered set of RGB images. This allows learning from a single camera view, e.g., in an existing robotic cell with a fixed-mount camera. We create synthetic views and dense pixel correspondences using data augmentations. We find that our descriptors are competitive with existing methods despite the simpler data recording and setup requirements. We show that training on synthetic correspondences provides descriptor consistency across a broad range of camera views. We compare against training with geometric correspondences from multiple views and provide ablation studies. We also show a robotic bin-picking experiment using descriptors learned from a fixed-mount camera for defining grasp preferences.
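The abstract's core idea is that a known augmentation transform yields ground-truth dense correspondences for free: if a synthetic view is produced by warping the source image, every source pixel's location in the warped view follows directly from the warp. Below is a minimal Python sketch of this idea under assumed details (a random affine warp via OpenCV; all function names are illustrative, not the authors' code):

```python
# Hypothetical sketch: creating a synthetic view and dense pixel
# correspondences from a known augmentation (a random affine warp).
# The actual paper may use a different family of augmentations.
import numpy as np
import cv2

def random_affine(h, w, max_rot_deg=30.0, max_scale=0.2, max_shift=0.1):
    """Sample a random 2x3 affine matrix mapping the source image to a synthetic view."""
    angle = np.random.uniform(-max_rot_deg, max_rot_deg)
    scale = 1.0 + np.random.uniform(-max_scale, max_scale)
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
    # Add a random translation, expressed as a fraction of the image size.
    M[:, 2] += np.random.uniform(-max_shift, max_shift, size=2) * (w, h)
    return M

def synthetic_view_with_correspondences(img, n_pairs=1000):
    """Warp `img` and return pixel pairs (src_uv, dst_uv) related by the known warp."""
    h, w = img.shape[:2]
    M = random_affine(h, w)
    warped = cv2.warpAffine(img, M, (w, h))
    # Sample source pixels and map them through the affine transform:
    # dst = M[:, :2] @ src + M[:, 2], written here for row vectors.
    src = np.random.randint(0, (w, h), size=(n_pairs, 2)).astype(np.float32)
    dst = src @ M[:, :2].T + M[:, 2]
    # Keep only correspondences that land inside the warped image.
    valid = (dst[:, 0] >= 0) & (dst[:, 0] < w) & (dst[:, 1] >= 0) & (dst[:, 1] < h)
    return warped, src[valid], dst[valid]

if __name__ == "__main__":
    img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in RGB image
    warped, src_uv, dst_uv = synthetic_view_with_correspondences(img)
    print(f"{len(src_uv)} valid correspondences")
```

Such pairs could then supervise a pixelwise contrastive loss on descriptor maps, pulling corresponding pixels together and pushing non-corresponding ones apart, without any registered multi-view data.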
Student First Author: no
Supplementary Material: zip