Deformable DETR: Deformable Transformers for End-to-End Object Detection

Published: 12 Jan 2021, Last Modified: 03 Apr 2024 · ICLR 2021 Oral
Keywords: Efficient Attention Mechanism, Deformation Modeling, Multi-scale Representation, End-to-End Object Detection
Abstract: DETR was recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules attend only to a small set of key sampling points around a reference. Deformable DETR achieves better performance than DETR (especially on small objects) with 10$\times$ fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
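The core idea in the abstract, attending only to a few sampled points around a reference location instead of the whole feature map, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which is a batched, multi-head, multi-scale CUDA module); function names, the single-head/single-query setup, and the pixel-coordinate convention here are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly interpolate feat (H, W, C) at fractional pixel (x, y)."""
    H, W, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    x0, y0 = max(x0, 0), max(y0, 0)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * feat[y0, x0] + wx * feat[y0, x1]
    bot = (1 - wx) * feat[y1, x0] + wx * feat[y1, x1]
    return (1 - wy) * top + wy * bot

def deformable_attention(feat, ref_xy, offsets, attn_logits):
    """Single-head deformable attention for one query (illustrative sketch).

    feat:        (H, W, C) feature map (the "values").
    ref_xy:      (2,) reference point in pixel coordinates.
    offsets:     (K, 2) learned sampling offsets around the reference.
    attn_logits: (K,) learned attention logits, one per sampling point.

    In the real model, offsets and logits are predicted by linear layers
    from the query embedding; here they are passed in directly.
    """
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()  # softmax over the K sampling points only
    out = np.zeros(feat.shape[-1])
    for k in range(offsets.shape[0]):
        x, y = ref_xy + offsets[k]
        out += w[k] * bilinear_sample(feat, x, y)
    return out
```

The key efficiency point: cost per query is O(K·C) for a small fixed K (e.g. 4 points per head), independent of H×W, which is what makes high-resolution and multi-scale feature maps affordable compared with dense attention over all H·W keys.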
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: Deformable DETR is an efficient and fast-converging end-to-end object detector. It mitigates the high complexity and slow convergence issues of DETR via a novel sampling-based efficient attention mechanism.
Supplementary Material: zip
Code: [fundamentalvision/Deformable-DETR](https://github.com/fundamentalvision/Deformable-DETR) + [16 community implementations](https://paperswithcode.com/paper/?openreview=gZ9hCDWe6ke)
Data: [COCO-O](https://paperswithcode.com/dataset/coco-o), [MS COCO](https://paperswithcode.com/dataset/coco), [SARDet-100K](https://paperswithcode.com/dataset/sardet-100k)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.04159/code)