Detect Everything with Few Examples

Published: 05 Sept 2024 · Last Modified: 08 Nov 2024 · CoRL 2024 · CC BY 4.0
Keywords: Robot Vision, Object Detection and Recognition, Few-shot Learning
TL;DR: We introduce DE-ViT, a few-shot object detector without the need for finetuning, which establishes new state-of-the-art on all few-shot detection benchmarks (Pascal VOC, COCO, LVIS), and we evaluate DE-ViT with a real robot in sorting novel objects.
Abstract: Few-shot object detection aims at detecting novel categories given only a few example images. This is a basic skill for a robot performing tasks in open environments. Recent methods focus on finetuning strategies, whose complicated procedures prohibit wider application. In this paper, we introduce DE-ViT, a few-shot object detector without the need for finetuning. DE-ViT's novel architecture is based on a new region-propagation mechanism for localization. The propagated region masks are transformed into bounding boxes through a learnable spatial integral layer. Instead of training prototype classifiers, we propose to use prototypes to project ViT features into a subspace that is robust to overfitting on base classes. We evaluate DE-ViT on few-shot and one-shot object detection benchmarks with Pascal VOC, COCO, and LVIS. DE-ViT establishes new state-of-the-art results on all benchmarks. Notably, for COCO, DE-ViT surpasses the few-shot SoTA by 15 mAP on 10-shot and 7.2 mAP on 30-shot, and the one-shot SoTA by 2.8 AP50. For LVIS, DE-ViT outperforms the few-shot SoTA by 17 box APr. Further, we evaluate DE-ViT with a real robot by building a pick-and-place system that sorts novel objects based on example images. The videos of our robot demonstrations, the source code, and the models of DE-ViT can be found at https://mlzxy.github.io/devit.
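To give intuition for the mask-to-box step the abstract mentions, here is a minimal sketch of a (non-learnable) spatial integral: it treats a soft region mask as a spatial probability distribution and reads off a box from its expected coordinates and spread. The function name, the expected-coordinate formulation, and the uniform-distribution width factor are our own illustrative assumptions; the paper's actual layer is learnable and is defined in the DE-ViT code, not reproduced here.

```python
import numpy as np

def spatial_integral_box(mask):
    """Turn a soft region mask (H, W) with values in [0, 1] into a
    bounding box (x0, y0, x1, y1) via expected coordinates.

    Illustrative sketch only -- DE-ViT's spatial integral layer is
    learnable; this fixed version just shows the underlying idea.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p = mask / (mask.sum() + 1e-8)            # normalize mask to a distribution
    cx = (p * xs).sum()                       # expected x (box center)
    cy = (p * ys).sum()                       # expected y (box center)
    sx = np.sqrt((p * (xs - cx) ** 2).sum())  # spatial std along x
    sy = np.sqrt((p * (ys - cy) ** 2).sum())  # spatial std along y
    # For a uniform distribution, full width = 2*sqrt(3) * std,
    # so a hard rectangular mask approximately recovers its own box.
    k = np.sqrt(3.0)
    return cx - k * sx, cy - k * sy, cx + k * sx, cy + k * sy

# A hard rectangular mask roughly recovers its own extent:
m = np.zeros((32, 32))
m[8:24, 4:28] = 1.0
print(np.round(spatial_integral_box(m), 1))
```

A learnable variant would replace the fixed width factor and coordinate weighting with trained parameters, letting the network calibrate how mask mass maps to box extent.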
Supplementary Material: zip
Spotlight Video: mp4
Website: https://mlzxy.github.io/devit
Code: http://github.com/mlzxy/devit
Publication Agreement: pdf
Student Paper: yes
Submission Number: 70