Domain Generalization for Domain-Linked Classes

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Domain Generalization, Fairness, Transfer Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Examining and increasing the generalizability of domain-linked classes
Abstract: Domain generalization (DG) focuses on transferring domain-invariant knowledge from multiple source domains (available at train time) to an $\textit{a priori}$ unseen target domain(s). This task implicitly assumes that a class of interest is expressed in multiple source domains ($\textit{domain-shared}$), which helps break spurious correlations between domain and class and enables domain-invariant learning. However, in real-world applications, classes may often be expressed only in a specific domain ($\textit{domain-linked}$), which leads to extremely poor generalization performance for these classes. In this work, we introduce this task to the community and develop an algorithm to learn generalizable representations for these domain-linked classes by transferring useful representations from domain-shared classes. Specifically, we propose a $\textbf{F}$air and c$\textbf{ON}$trastive feature-space regularization algorithm for $\textbf{D}$omain-linked DG, $\texttt{FOND}$. Rigorous and reproducible experiments with baselines across popular DG tasks demonstrate that our method and its variants achieve state-of-the-art DG performance for domain-linked classes, given a sufficient number of domain-shared classes. Complementary to these contributions, we develop theoretical insights for this task and practical insights for domain-linked class generalizability in real-world settings.
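The abstract describes $\texttt{FOND}$ as a contrastive feature-space regularization algorithm; the submission itself does not spell out the objective here, so the following is only a minimal NumPy sketch of a generic supervised contrastive loss of the kind such regularizers build on (the function name, temperature value, and toy data are illustrative, not the authors' implementation):

```python
import numpy as np

def sup_con_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss sketch.

    features: (N, D) array of L2-normalized embeddings.
    labels:   (N,) integer class labels; same-label pairs are positives.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature            # pairwise similarities
    logits_mask = 1.0 - np.eye(n)                        # exclude self-pairs
    # positives: same class label, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    sim = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability over positives, for anchors that have positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

Pulling same-class embeddings together across domains in this way is one route to the abstract's goal of transferring representations from domain-shared to domain-linked classes; the loss is lower when same-class features cluster tightly regardless of their source domain.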
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2856