Discrepancy-Optimal Meta-Learning for Domain Generalization

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Domain generalization, Meta-learning, Transfer learning, Generalization bound
Abstract: This work tackles the problem of domain generalization (DG) by learning to reduce domain shift with an episodic training procedure. In particular, we measure domain shift with the $\mathcal{Y}$-discrepancy and learn to minimize the $\mathcal{Y}$-discrepancy between the unseen target domain and the source domains using only source-domain samples. Theoretically, we give a PAC-style generalization bound for discrepancy-optimal meta-learning and compare it with other DG bounds, including those for ERM and domain-invariant learning. The analysis shows a tradeoff between classification performance and computational complexity for discrepancy-optimal meta-learning, and it motivates a bilevel optimization algorithm for DG. Empirically, we evaluate the algorithm with DomainBed and achieve state-of-the-art results on two DG benchmarks.
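For context, the $\mathcal{Y}$-discrepancy referenced above is conventionally defined, for two labeled distributions $P$ and $Q$, a hypothesis class $\mathcal{H}$, and a loss $\ell$, as the largest risk gap over the class (this follows the standard domain-adaptation literature; the notation is a common convention rather than taken verbatim from the paper):

$$\mathrm{disc}_{\mathcal{Y}}(P, Q) = \sup_{h \in \mathcal{H}} \left| \mathbb{E}_{(x, y) \sim P}\,\ell(h(x), y) \;-\; \mathbb{E}_{(x, y) \sim Q}\,\ell(h(x), y) \right|$$

Because the target domain is unseen in DG, this quantity cannot be computed directly; the role of the meta-learning procedure described above is to learn to minimize it using source-domain samples alone.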
One-sentence Summary: This paper tackles the problem of domain generalization via discrepancy-optimal meta-learning from both theoretical and empirical perspectives.
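The abstract does not spell out the training loop, so the PyTorch sketch below only illustrates the episodic, discrepancy-penalized flavor of such a method. It is a simplified sketch under assumptions: the leave-one-domain-out split, the `lam` weight, and the risk-gap surrogate for the $\mathcal{Y}$-discrepancy are illustrative choices, and the full bilevel (inner/outer) optimization referred to in the abstract is omitted.

```python
import random
import torch
import torch.nn as nn

def episodic_discrepancy_step(model, domain_batches, loss_fn, optimizer, lam=1.0):
    """One episodic update: hold out one source domain as a pseudo-target and
    penalize the gap between its risk and the average risk on the remaining
    source domains, a crude surrogate for the Y-discrepancy term.
    Expects at least two source-domain batches."""
    held_out = random.randrange(len(domain_batches))
    train_losses, target_loss = [], None
    for i, (x, y) in enumerate(domain_batches):
        loss = loss_fn(model(x), y)
        if i == held_out:
            target_loss = loss          # risk on the pseudo-target domain
        else:
            train_losses.append(loss)   # risks on the meta-train domains
    erm_loss = torch.stack(train_losses).mean()
    disc_proxy = (erm_loss - target_loss).abs()   # surrogate discrepancy penalty
    total = erm_loss + lam * disc_proxy
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Hypothetical usage with a toy model and random data for three source domains.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batches = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(3)]
episodic_discrepancy_step(model, batches, nn.CrossEntropyLoss(), opt)
```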