Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: deep learning, object detection, robustness, neural networks, data augmentation, autonomous driving
TL;DR: A benchmark to assess the robustness of object detection models towards common image corruptions. Like classification models, object detection models perform worse on corrupted images. Training with stylized data reduces the gap for all corruptions.
Abstract: The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. We here provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed PASCAL-C, COCO-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30-60% of the original performance). However, a simple data augmentation trick - stylizing the training images - leads to a substantial increase in robustness across corruption type, severity and dataset. We envision our comprehensive benchmark to track future progress towards building robust object detection models. Benchmark, code and data are available at: (hidden for double blind review)
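The benchmark idea described in the abstract can be illustrated with a short sketch: corrupt each test image at several severity levels and average the detector's score over corruption types and severities. The corruption functions below (Gaussian noise, contrast reduction) and the `detector.detect` / `metric` interfaces are simplified placeholders I am assuming for illustration; the full benchmark uses a much larger set of corruptions, and this is not the authors' released code.

```python
# Minimal sketch of a corruption-robustness evaluation loop.
# `detector` and `metric` are hypothetical placeholders: any object with a
# detect(image) -> predictions method and any AP-style metric would do.
import numpy as np

def gaussian_noise(image, severity=1):
    """Stand-in corruption: additive Gaussian noise, severity 1-5."""
    scale = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    noisy = image / 255.0 + np.random.normal(size=image.shape, scale=scale)
    return np.clip(noisy, 0.0, 1.0) * 255.0

def contrast(image, severity=1):
    """Stand-in corruption: contrast reduction, severity 1-5."""
    factor = [0.75, 0.5, 0.4, 0.3, 0.15][severity - 1]
    mean = image.mean(axis=(0, 1), keepdims=True)
    return np.clip((image - mean) * factor + mean, 0, 255)

CORRUPTIONS = {"gaussian_noise": gaussian_noise, "contrast": contrast}

def mean_performance_under_corruption(detector, images, annotations, metric):
    """Average detection performance over corruption types and severities 1-5."""
    scores = []
    for corrupt in CORRUPTIONS.values():
        for severity in range(1, 6):
            preds = [detector.detect(corrupt(img, severity)) for img in images]
            scores.append(metric(preds, annotations))
    return float(np.mean(scores))
```

Comparing this corruption-averaged score to the score on clean images gives the kind of relative performance drop (down to 30-60% of the original) that the abstract reports.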
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:1907.07484/code)