Rethinking Multi-domain Generalization with A General Learning Objective

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Multi-domain generalization, domain generalization
Abstract: Multi-domain generalization (mDG) universally aims to diminish the gap between the training and testing distributions, which in turn facilitates learning a mapping from marginal distributions to labels. However, the mDG literature lacks a general learning objective paradigm and often imposes the constraint of a static marginal distribution on the target. In this paper, we propose to leverage a $Y$-mapping $\psi$ to relax this constraint. We then rethink the learning objective for mDG and design a new general learning objective that can interpret and analyze most existing mDG wisdom. This general objective is bifurcated into two synergistic aims: learning domain-independent conditional features and maximizing a posterior. Explorations also extend to two effective regularization terms that incorporate prior information and suppress invalid causality, alleviating the issues that come with the relaxed constraint. Inspired by the Generalized Jensen-Shannon Divergence, we derive an upper bound for the domain alignment of domain-independent conditional features, revealing that many previous mDG endeavors only partially optimize the objective and thus achieve limited performance. The general learning objective is accordingly simplified into four practical components and can be easily applied to various tasks and frameworks. Overall, our study proposes a general, robust, and flexible mechanism to handle complex domain shifts. Extensive empirical results indicate that the proposed objective with $Y$-mapping leads to substantially better mDG performance.
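The abstract grounds its domain-alignment bound in the Generalized Jensen-Shannon Divergence. As background only (not the paper's implementation; function names here are illustrative), a minimal sketch of GJSD for categorical distributions, using its standard form $\mathrm{GJSD}(P_1,\dots,P_n; w) = H\!\left(\sum_i w_i P_i\right) - \sum_i w_i H(P_i)$:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats; 0 * log 0 is treated as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def generalized_jsd(dists, weights=None):
    """Generalized Jensen-Shannon Divergence among categorical
    distributions: H(mixture) minus the weighted mean of entropies.
    Defaults to uniform weights over the given distributions."""
    dists = np.asarray(dists, dtype=float)
    n = len(dists)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    mixture = np.average(dists, axis=0, weights=w)
    per_dist = np.array([shannon_entropy(p) for p in dists])
    return shannon_entropy(mixture) - np.dot(w, per_dist)
```

GJSD is zero when all distributions coincide and reaches its maximum (here, log 2 for two disjoint binary distributions) when they have no overlap, which is why driving such a divergence toward zero serves as a domain-alignment criterion.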
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2465