An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers

21 May 2021, 20:50 (modified: 26 Oct 2021, 18:23) · NeurIPS 2021 Poster
Keywords: Domain Generalization, Empirical Risk, Generalization Measures, Deep Learning, OOD, Implicit Bias
TL;DR: Explaining and Understanding Domain Adaptation Capabilities of Empirical Risk Minimization
Abstract: Recent work demonstrates that deep neural networks trained using Empirical Risk Minimization (ERM) can generalize under distribution shift, outperforming specialized training algorithms for domain generalization. The goal of this paper is to further understand this phenomenon. In particular, we study the extent to which the seminal domain adaptation theory of Ben-David et al. (2007) explains the performance of ERMs. Perhaps surprisingly, we find that this theory does not provide a tight explanation of the out-of-domain generalization observed across a large number of ERM models trained on three popular domain generalization datasets. This motivates us to investigate other possible measures—that, however, lack theory—which could explain generalization in this setting. Our investigation reveals that measures relating to the Fisher information, predictive entropy, and maximum mean discrepancy are good predictors of the out-of-distribution generalization of ERM models. We hope that our work helps galvanize the community towards building a better understanding of when deep networks trained with ERM generalize out-of-distribution.
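One of the measures the abstract highlights, maximum mean discrepancy (MMD), quantifies the distance between two sample distributions via kernel mean embeddings. As a minimal sketch (not the paper's implementation), the snippet below estimates squared MMD with an RBF kernel on synthetic feature vectors; all names and the choice of `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples X and Y
    kxx = rbf_kernel(X, X, gamma).mean()
    kyy = rbf_kernel(Y, Y, gamma).mean()
    kxy = rbf_kernel(X, Y, gamma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# Features drawn from the same distribution vs. a mean-shifted one
same = mmd2(rng.normal(0.0, 1.0, (200, 8)), rng.normal(0.0, 1.0, (200, 8)))
shifted = mmd2(rng.normal(0.0, 1.0, (200, 8)), rng.normal(2.0, 1.0, (200, 8)))
print(same < shifted)  # a distribution shift yields a larger MMD estimate
```

In the paper's setting, such a statistic would be computed between feature representations of in-domain and held-out-domain data and correlated with out-of-distribution accuracy.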
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: zip