Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts

Published: 28 Oct 2023, Last Modified: 02 Apr 2024 — DistShift 2023 Poster
Keywords: out-of-distribution generalization, robustness, distribution shift, CLIP
Abstract: Out-of-distribution (OOD) generalization is a complicated problem due to the idiosyncrasies of possible distribution shifts between training and test domains. Most benchmarks employ diverse datasets to address the issue; however, the degree of distribution shift between the training and test domains of each dataset remains largely fixed. Our study delves into a more nuanced evaluation setting that covers a broad range of shift degrees. We show that the robustness of neural networks can be quite brittle and inconsistent under different shift degrees, so one should be cautious in drawing conclusions from evaluations under a limited set of degrees. In addition, we find that CLIP, a representative vision-language foundation model, can be sensitive to even minute distribution shifts on novel downstream tasks. This suggests that while pre-training may improve downstream in-distribution performance, it can have minimal or even adverse effects on generalization in certain OOD scenarios of the downstream task. A longer version of this paper can be found at https://arxiv.org/abs/2310.06622.
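The evaluation setting the abstract describes — measuring a fixed model across a sweep of shift degrees rather than at a single fixed shift — can be illustrated with a minimal sketch. This is a hypothetical toy setup (a nearest-class-mean classifier on synthetic Gaussian data, with shift degree modeled as added covariate noise), not the paper's actual protocol or datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: 10-D Gaussian classes with means at -1 and +1 per dim.
d, n = 10, 2000
labels = rng.integers(0, 2, n)
means = np.where(labels[:, None] == 1, 1.0, -1.0) * np.ones((n, d))
x_train = means + rng.normal(scale=0.5, size=(n, d))

# "Model": nearest-class-mean classifier fit on in-distribution data.
mu0 = x_train[labels == 0].mean(axis=0)
mu1 = x_train[labels == 1].mean(axis=0)

def predict(x):
    d0 = np.linalg.norm(x - mu0, axis=1)
    d1 = np.linalg.norm(x - mu1, axis=1)
    return (d1 < d0).astype(int)

def accuracy_at_shift(sigma, n_test=2000):
    """Test accuracy when covariate noise of scale `sigma` (the shift
    degree) is added on top of the in-distribution noise."""
    y = rng.integers(0, 2, n_test)
    m = np.where(y[:, None] == 1, 1.0, -1.0) * np.ones((n_test, d))
    x = (m + rng.normal(scale=0.5, size=(n_test, d))
           + rng.normal(scale=sigma, size=(n_test, d)))  # the shift
    return (predict(x) == y).mean()

# Sweep a range of shift degrees instead of evaluating at one fixed shift.
for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"shift degree {sigma:.1f}: accuracy {accuracy_at_shift(sigma):.3f}")
```

Plotting such an accuracy-versus-degree curve for two models can reveal crossings: a model that looks more robust at one shift degree may fall behind at another, which is why conclusions drawn from a single fixed degree can be misleading.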
Submission Number: 50