Abstract: Deep neural networks have reached high accuracy on object detection, but their success hinges on large amounts of labeled data. To reduce the dependency on labels, various active learning strategies have been proposed, based on the confidence of the detector. However, these methods are biased towards high-performing classes and yield acquired datasets that are not representative of the test data. In this work, we propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector, ensuring that the network performs well across all classes. Furthermore, our method leverages auto-labeling to suppress potential distribution drift while boosting the performance of the model. Experiments on PASCAL VOC07+12 and MS-COCO show that our method consistently outperforms a wide range of active learning methods, yielding up to a 7.7% improvement in mAP, or up to an 82% reduction in labeling cost. Code is available at https://github.com/NVlabs/AL-SSL.
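To make the acquisition idea concrete, the sketch below illustrates one plausible way to combine an uncertainty term with a robustness term when ranking unlabeled images. It is a minimal illustration only, not the paper's actual scoring functions: the names `class_entropy`, `consistency_gap`, and the trade-off weight `alpha` are assumptions introduced here for exposition.

```python
import numpy as np

def class_entropy(probs):
    """Entropy of per-detection class probabilities; higher = more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def consistency_gap(probs_orig, probs_aug):
    """Disagreement between predictions on an image and an augmented view
    (e.g., a horizontal flip); higher = less robust."""
    return np.abs(probs_orig - probs_aug).sum(axis=-1)

def acquisition_score(probs_orig, probs_aug, alpha=0.5):
    """Image-level score mixing the most uncertain detection with the least
    consistent one. `alpha` is a hypothetical trade-off weight."""
    u = class_entropy(probs_orig).max()
    r = consistency_gap(probs_orig, probs_aug).max()
    return alpha * u + (1.0 - alpha) * r

# Toy usage: rank a pool of unlabeled images and pick the top-k to annotate.
rng = np.random.default_rng(0)
pool = []
for img_id in range(100):
    p = rng.dirichlet(np.ones(20), size=5)        # 5 detections, 20 classes
    p_aug = np.clip(p + rng.normal(0, 0.05, p.shape), 1e-12, None)
    p_aug /= p_aug.sum(axis=-1, keepdims=True)    # renormalize perturbed scores
    pool.append((img_id, acquisition_score(p, p_aug)))

top_k = sorted(pool, key=lambda x: -x[1])[:10]    # images sent for labeling
print(top_k)
```

In such a scheme, the remaining low-scoring but confidently predicted images would be candidates for auto-labeling rather than human annotation, which is the role auto-labeling plays in the abstract above.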