Domain-Specific Risk Minimization for Out-of-Distribution Generalization

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Out-of-Distribution Generalization, adaptivity gap, hypothesis space enhancement
TL;DR: In this paper, we develop a new generalization bound that is independent of the choice of hypothesis space and measures the adaptivity gap directly. Two test-time adaptation methods are then proposed, inspired by the bound.
Abstract: Recent domain generalization (DG) approaches typically use the classifier trained on source domains for inference on the unseen target domain. However, such a classifier can be arbitrarily far from the optimal classifier for the target domain, a discrepancy we term the ``adaptivity gap''. Without exploiting domain information from the unseen test samples, estimating and minimizing the adaptivity gap is intractable, which prevents us from robustifying a model to any unknown distribution. In this paper, we first establish a generalization bound that naturally accounts for the adaptivity gap. Our bound motivates two strategies to reduce the gap: the first ensembles multiple classifiers to enrich the hypothesis space, and the second adapts model parameters using online target samples. We thus propose Domain-specific Risk Minimization (DRM) for better domain generalization. During training, DRM models the distribution of each source domain separately; during testing, DRM combines the classifiers dynamically for each target sample, and each arriving unlabeled target sample is used to further adapt the model. Extensive experiments demonstrate the effectiveness of the proposed DRM for domain generalization, with the following advantages: 1) it significantly outperforms competitive baselines under different distribution-shift settings; 2) it achieves comparable or superior accuracy on all training domains compared to vanilla empirical risk minimization (ERM); 3) it remains simple and efficient during training; and 4) it is complementary to invariant-learning approaches.
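To make the two strategies in the abstract concrete, below is a minimal, hypothetical sketch of the general idea: domain-specific classifier heads combined per test sample, plus a test-time update on each arriving unlabeled batch (entropy minimization is used here only as a common stand-in objective). All names (`feature_extractor`, `domain_heads`, `domain_weighter`, `test_time_adapt`) are illustrative assumptions, not the authors' actual DRM implementation.

```python
# Hypothetical sketch, not the paper's code: per-domain heads + dynamic combination
# at test time, with online adaptation on unlabeled target samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_DOMAINS, NUM_CLASSES, FEAT_DIM, IN_DIM = 3, 7, 64, 128

feature_extractor = nn.Sequential(nn.Linear(IN_DIM, FEAT_DIM), nn.ReLU())
# One classifier head per source domain (strategy 1: enrich the hypothesis space).
domain_heads = nn.ModuleList([nn.Linear(FEAT_DIM, NUM_CLASSES) for _ in range(NUM_DOMAINS)])
# Scores how relevant each source domain is for a given sample (assumed combination rule).
domain_weighter = nn.Linear(FEAT_DIM, NUM_DOMAINS)

def predict(x):
    """Combine the domain-specific classifiers dynamically for each test sample."""
    feats = feature_extractor(x)
    weights = F.softmax(domain_weighter(feats), dim=-1)            # (B, NUM_DOMAINS)
    logits = torch.stack([head(feats) for head in domain_heads])   # (NUM_DOMAINS, B, C)
    return torch.einsum("bd,dbc->bc", weights, logits)             # (B, C)

optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3)

def test_time_adapt(x):
    """Strategy 2: update the model with an arriving unlabeled target batch.
    Entropy minimization is an assumed objective; the paper's rule may differ."""
    logits = predict(x)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage: a stream of unlabeled target batches arriving one at a time.
stream = torch.randn(5, 4, IN_DIM)
for batch in stream:
    preds = test_time_adapt(batch).argmax(dim=-1)
```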
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Community Implementations: 1 code implementation (CatalyzeX): https://www.catalyzex.com/paper/arxiv:2208.08661/code