When Robustness Doesn’t Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: robustness, distribution shift, image corruptions, adversarial robustness, reliable machine learning
TL;DR: We compare current robustness interventions and find that none promote robustness on natural distribution shifts.
Abstract: We conduct a large experimental comparison of various robustness metrics for image classification. The main question of our study is to what extent current synthetic robustness interventions (ℓp-adversarial examples, noise corruptions, etc.) promote robustness under natural distribution shifts occurring in real data. To this end, we evaluate 147 ImageNet models under 199 different evaluation settings. We find that no current robustness intervention improves robustness on natural distribution shifts beyond a baseline given by standard models without a robustness intervention. The only exception is the use of larger training datasets, which provides a small increase in robustness on one natural distribution shift. Our results indicate that robustness improvements on real data may require new methodology and more evaluations on natural distribution shifts.
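The core measurement behind the study is simple: score each model on the standard ImageNet validation set and again on a shifted test set, then compare the two accuracies across many models and interventions. Below is a minimal sketch of that protocol for a single model; the dataset paths and the choice of ResNet-50 are hypothetical placeholders (the paper's actual testbed spans 147 models and 199 evaluation settings), and any ImageFolder-style copy of the ImageNet validation set and a natural-shift set such as ImageNetV2 would work.

```python
# Sketch: compare a model's accuracy in-distribution vs. under a
# natural distribution shift. Paths below are illustrative only.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def top1_accuracy(model, root):
    """Top-1 accuracy of `model` on an ImageFolder dataset at `root`."""
    loader = DataLoader(datasets.ImageFolder(root, preprocess),
                        batch_size=64, num_workers=4)
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total

model = models.resnet50(pretrained=True)  # one candidate model of many
standard = top1_accuracy(model, "data/imagenet/val")  # in-distribution
shifted = top1_accuracy(model, "data/imagenetv2")     # natural shift
print(f"standard: {standard:.3f}  shifted: {shifted:.3f}  "
      f"drop: {standard - shifted:.3f}")
```

The paper's central question is then whether interventions that shrink this drop on synthetic shifts (noise, ℓp-adversarial perturbations) also shrink it on natural shifts; plotting shifted accuracy against standard accuracy for all models makes deviations from the baseline trend visible.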