Abstract: We establish rigorous benchmarks for visual perception robustness. Synthetic images, such as ImageNet-C, ImageNet-9, and Stylized ImageNet, provide a specific type of evaluation over synthetic corruptions, backgrounds, and textures, yet these robustness benchmarks are restricted to predefined variations and suffer from low synthetic quality. In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness. Leveraging diffusion models, we are able to generate images with more diversified backgrounds, textures, and materials than any prior work, and we term this benchmark ImageNet-D. Experimental results show that ImageNet-D causes a significant accuracy drop for a range of vision models, from the standard ResNet visual classifier to the latest foundation models such as CLIP and MiniGPT-4, reducing their accuracy by up to 60%. Our work suggests that diffusion models can be an effective source for testing vision models. The code and dataset are available at https://github.com/chenshuangzhang/imagenet_d.
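To make the idea concrete, below is a minimal sketch (not the authors' exact pipeline) of how diffusion-generated images with varied backgrounds, textures, and materials could be composed and filtered into a hard test set. The model checkpoints, object categories, nuisance attribute lists, and the CLIP-based filtering rule are illustrative assumptions, and a GPU is assumed to be available.

```python
# Sketch: render objects under varied backgrounds/textures/materials with a
# text-to-image diffusion model, then keep images that a surrogate model
# (here zero-shot CLIP) misclassifies. All names below are illustrative.
import itertools
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda"  # assumes a CUDA GPU is available
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["backpack", "teapot", "umbrella"]                  # example object categories
nuisances = ["on a beach", "made of glass", "in dense fog"]   # hypothetical background/material/texture factors

hard_images = []
for obj, nuis in itertools.product(classes, nuisances):
    # Compose a prompt pairing the object with a nuisance factor and generate an image.
    image = pipe(f"a photo of a {obj} {nuis}", num_inference_steps=30).images[0]

    # Zero-shot CLIP classification over the candidate class names.
    inputs = proc(
        text=[f"a photo of a {c}" for c in classes],
        images=image, return_tensors="pt", padding=True,
    ).to(device)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image[0]
    pred = classes[logits.argmax().item()]

    # Keep only images that fool the surrogate model, i.e. candidate "hard" samples.
    if pred != obj:
        hard_images.append((image, obj, nuis))
```

In this sketch the surrogate filter is a single CLIP model; any held-out classifier (or an ensemble) could play the same role, and the retained images would then be used to evaluate other vision models.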