Natural Adversarial Objects

07 Jun 2021 (modified: 24 May 2023) · Submitted to NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
Keywords: robustness, dataset, adversarial example
TL;DR: We introduce a challenging test set to evaluate the robustness of object detection models
Abstract: Although state-of-the-art object detection methods show compelling performance, the underlying models are often not robust to adversarial attacks or out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,936 images and 13,604 objects that are unmodified, yet cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops by 68.3% when evaluated on NAO compared to the standard MSCOCO validation set. We investigate why examples in NAO are difficult to detect and classify. Patch-shuffling experiments reveal that models are overly sensitive to local texture. Additionally, using integrated gradients and background replacement, we find that the detection model relies on pixel information within the bounding box and is insensitive to the background context when predicting class labels.
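The patch-shuffling probe mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general technique under assumed details (a 32-pixel patch size and a NumPy HxWxC image layout), not the authors' released code or exact protocol.

```python
# Minimal sketch (not the paper's code): permute non-overlapping patches of an
# image to probe how much a detector relies on local texture vs. global shape.
# Patch size and image layout (H, W, C) are illustrative assumptions.
import numpy as np

def shuffle_patches(image: np.ndarray, patch_size: int = 32, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with patch_size x patch_size tiles randomly permuted."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    gh, gw = h // patch_size, w // patch_size          # number of full patches per axis
    cropped = image[: gh * patch_size, : gw * patch_size]
    # Split the cropped region into a stack of (patch_size, patch_size, C) tiles.
    tiles = (cropped
             .reshape(gh, patch_size, gw, patch_size, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(gh * gw, patch_size, patch_size, c))
    tiles = tiles[rng.permutation(len(tiles))]          # shuffle tile order
    # Reassemble the shuffled tiles back into an image grid.
    shuffled = (tiles
                .reshape(gh, gw, patch_size, patch_size, c)
                .transpose(0, 2, 1, 3, 4)
                .reshape(gh * patch_size, gw * patch_size, c))
    out = image.copy()
    out[: gh * patch_size, : gw * patch_size] = shuffled
    return out
```

Comparing a detector's predictions and confidences on the original image versus the patch-shuffled version gives a rough indication of how much the model depends on local texture rather than object-level structure.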
Supplementary Material: zip
URL: https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8