You Only Look at One: Category-Level Object Representations for Pose Estimation From a Single Example

16 Jun 2022, 10:45 (modified: 16 Nov 2022, 08:13)
CoRL 2022 Poster
Readers: Everyone
Student First Author: yes
Keywords: pose estimation, object representations, one-shot
TL;DR: Real-time pose estimation that generalises to whole object categories after inspecting just one object
Abstract: In order to meaningfully interact with the world, robot manipulators must be able to interpret the objects they encounter. A critical aspect of this interpretation is pose estimation: inferring quantities that describe the position and orientation of an object in 3D space. Most existing approaches to pose estimation make limiting assumptions, often working only for specific, known object instances, or at best generalising to an object category using large pose-labelled datasets. In this work, we present a method for achieving category-level pose estimation by inspection of just a single object from a desired category. We show that we can subsequently perform accurate pose estimation for unseen objects from an inspected category, and considerably outperform prior work by exploiting multi-view correspondences. We demonstrate that our method runs in real time, enabling a robot manipulator to rearrange previously unseen objects faithfully in terms of placement and orientation. Finally, we showcase our method in a continual learning setting, with a robot able to determine whether objects belong to known categories and, if not, use active perception to produce a one-shot category representation for subsequent pose estimation.
Supplementary Material: zip