Predicting Out-of-Domain Generalization with Neighborhood Invariance

Published: 18 Jun 2023, Last Modified: 18 Jun 2023. Accepted by TMLR.
Abstract: Developing and deploying machine learning models safely depends on the ability to characterize and compare their abilities to generalize to new environments. Although recent work has proposed a variety of methods that can directly predict or theoretically bound the generalization capacity of a model, they rely on strong assumptions such as matching train/test distributions and access to model gradients. In order to characterize generalization when these assumptions are not satisfied, we propose neighborhood invariance, a measure of a classifier’s output invariance in a local transformation neighborhood. Specifically, given an input test point, we sample a set of transformations and calculate the invariance as the largest fraction of transformed points classified into the same class. Crucially, our measure is simple to calculate, does not depend on the test point’s true label, makes no assumptions about the data distribution or model, and can be applied even in out-of-domain (OOD) settings where existing methods cannot, requiring only the selection of a set of appropriate data transformations. In experiments on robustness benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our neighborhood invariance measure and actual OOD generalization on over 4,600 models evaluated on over 100 train/test domain pairs.
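As a concrete illustration of the measure described in the abstract, the following minimal sketch computes neighborhood invariance for a single unlabeled test point. The `classify` function and `transforms` list are hypothetical placeholders for a model's prediction function and a pre-sampled set of data transformations; this is not the authors' released implementation.

```python
# Minimal sketch of the neighborhood invariance (NI) measure: the largest
# fraction of transformed copies of a test point that receive the same
# predicted class. Interfaces below are illustrative assumptions.
from collections import Counter
from typing import Callable, Sequence


def neighborhood_invariance(
    x,                                  # a single unlabeled test input
    classify: Callable,                 # maps an input to a predicted class label
    transforms: Sequence[Callable],     # sampled transformations (e.g., augmentations)
) -> float:
    """Return the largest fraction of transformed inputs sharing one predicted class."""
    predictions = [classify(t(x)) for t in transforms]
    most_common_count = Counter(predictions).most_common(1)[0][1]
    return most_common_count / len(predictions)
```

Because the measure uses only model predictions on transformed inputs, it needs no true labels or gradients, which is what allows it to be applied in the OOD settings described above.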
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Additional experiments evaluating 196 models from the ImageNet Testbed [1]. These models cover various architectures including DenseNet, EfficientNet, ResNeXt, and Vision Transformers, as well as linear probes on pre-trained CLIP embeddings and more. We evaluate these models on 7 datasets: the standard domain shifts of ImageNetV2, ImageNet-Sketch, ImageNet-R, ObjectNet, YTBB, and ImageNet-Vid, as well as the adversarial shift of ImageNet-A. We present results in Table 2(a).
- We find that NI-RandAug slightly outperforms ATC methods on standard ImageNet domain shifts. On adversarial data, where ATC methods fail completely, our NI-based methods maintain strong performance. These results further support the conclusions drawn from the smaller-scale experiments on the CIFAR-10 and Numbers datasets.
- Additional ablations on the effects of robustness interventions (adversarial training, data augmentations, etc.) as well as additional data (contrastive pretraining, zero-shot linear probes, etc.), obtained by analyzing subsets of the models evaluated above. We find that, compared to standard training, additional robustness and data slightly degrade macro $\tau$ but increase $R^2$. However, compared to the performance on all models, NI-RandAug performs fairly consistently across all model subsets.
- Standard deviations for all results in the main Table 2, added to the Appendix in Table 28. We find that, in general, ATC methods exhibit much higher variance than NI-based methods. For example, on ImageNet dataset shifts, NI-RandAug exhibits an $R^2$ variance of 0.091, whereas ATC methods exhibit a variance of 0.27, indicating that our method both performs better and is more consistent across datasets.
- Additional related work added on different notions of invariance, as well as methods that analyze linear relationships between agreement, ID accuracy, and OOD accuracy.

[1] Taori et al. Measuring Robustness to Natural Distribution Shifts in Image Classification. NeurIPS 2020.
Assigned Action Editor: ~Vincent_Dumoulin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 904