Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

Published: 21 Dec 2018, Last Modified: 03 Apr 2024 · ICLR 2019 Conference Blind Submission
Abstract: In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, not worst-case adversarial perturbations. We find negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
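The paper's headline metric aggregates a classifier's error over each corruption type and severity level, normalized against AlexNet, to give a mean Corruption Error (mCE). A minimal sketch of that computation is below; the error rates used here are made-up placeholders for illustration, not numbers from the benchmark.

```python
# Sketch of the mean Corruption Error (mCE) metric: for each corruption type,
# sum a model's error rates over severity levels 1-5, divide by AlexNet's
# summed errors on the same corruption, then average across corruption types.

def corruption_error(model_errs, alexnet_errs):
    """CE for one corruption: ratio of errors summed over severities."""
    return sum(model_errs) / sum(alexnet_errs)

def mean_corruption_error(model, alexnet):
    """mCE: average CE over all corruption types, as a percentage."""
    ces = [corruption_error(model[c], alexnet[c]) for c in model]
    return 100.0 * sum(ces) / len(ces)

# Hypothetical top-1 error rates at severities 1..5 for two corruptions.
alexnet = {"gaussian_noise": [0.60, 0.70, 0.80, 0.90, 0.95],
           "fog":            [0.50, 0.55, 0.60, 0.70, 0.80]}
resnet  = {"gaussian_noise": [0.30, 0.40, 0.55, 0.70, 0.80],
           "fog":            [0.25, 0.30, 0.35, 0.45, 0.55]}

print(round(mean_corruption_error(resnet, alexnet), 1))  # → 65.0
```

Normalizing by AlexNet keeps corruptions with very different difficulty (e.g. heavy noise vs. mild fog) on a comparable scale before averaging; a model matching AlexNet everywhere would score an mCE of 100.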
Keywords: robustness, benchmark, convnets, perturbations
TL;DR: We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness
Code: [hendrycks/robustness](https://github.com/hendrycks/robustness) + [12 community implementations](https://paperswithcode.com/paper/?openreview=HJz6tiCqYm)
Data: [CIFAR-10C](https://paperswithcode.com/dataset/cifar-10c), [ImageNet-C](https://paperswithcode.com/dataset/imagenet-c), [ImageNet-P](https://paperswithcode.com/dataset/imagenet-p), [Tiny-ImageNet-C](https://paperswithcode.com/dataset/tiny-imagenet-c), [ImageNet](https://paperswithcode.com/dataset/imagenet), [Stylized ImageNet](https://paperswithcode.com/dataset/stylized-imagenet)
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:1807.01697/code)