Disentangled Embedding through Style and Mutual Information for Domain Generalization

TMLR Paper 4521 Authors

20 Mar 2025 (modified: 26 Mar 2025) · Under review for TMLR · CC BY 4.0
Abstract: Deep neural networks often experience performance degradation when faced with distributional shifts between training and testing data, a challenge referred to as domain shift. Domain Generalization (DG) addresses this issue by training models on multiple source domains, enabling the development of invariant representations that generalize to unseen distributions. While existing DG methods have achieved success by minimizing variations across source domains within a shared feature space, recent advances inspired by representation disentanglement have demonstrated improved performance by separating latent features into domain-specific and domain-invariant components. We propose two novel frameworks: Disentangled Embedding through Mutual Information (DETMI) and Disentangled Embedding through Style Information (DETSI). DETMI enforces disentanglement by employing a mutual information estimator, minimizing the mutual dependence between domain-agnostic and domain-specific embeddings. DETSI, on the other hand, achieves disentanglement through style extraction and perturbation, facilitating the learning of domain-invariant and domain-specific representations. Extensive experiments on the PACS, Office-Home, and VLCS datasets show that both frameworks outperform several state-of-the-art DG techniques.
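The abstract names two mechanisms without implementation detail: minimizing the mutual information between domain-invariant and domain-specific embeddings (DETMI), and perturbing extracted style statistics (DETSI). The sketch below is a minimal illustration of both ideas, assuming PyTorch: it pairs a MINE-style neural mutual-information estimator (Belghazi et al., 2018) with a MixStyle-like perturbation of channel statistics (Zhou et al., 2021). The class names, dimensions, and hyperparameters are illustrative assumptions, not the authors' actual DETMI/DETSI architectures.

```python
import torch
import torch.nn as nn


class MINEEstimator(nn.Module):
    """Neural mutual-information estimator in the style of MINE.

    Gives a Donsker-Varadhan lower bound on I(Z_inv; Z_spec); training the
    encoder to minimize this bound pushes the domain-invariant and
    domain-specific embeddings toward statistical independence.
    Hypothetical stand-in, not the paper's estimator.
    """

    def __init__(self, dim_inv: int, dim_spec: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_inv + dim_spec, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z_inv: torch.Tensor, z_spec: torch.Tensor) -> torch.Tensor:
        # Joint samples: matched embedding pairs from the same input.
        joint = self.net(torch.cat([z_inv, z_spec], dim=1))
        # Marginal samples: break the pairing by shuffling z_spec in the batch.
        perm = torch.randperm(z_spec.size(0))
        marginal = self.net(torch.cat([z_inv, z_spec[perm]], dim=1))
        # Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)].
        return joint.mean() - torch.log(torch.exp(marginal).mean() + 1e-8)


def perturb_style(feat: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """MixStyle-like perturbation of per-channel feature statistics.

    Treats channel-wise mean/std of a (B, C, H, W) feature map as 'style'
    and mixes them across the batch, so a predictor trained on the
    perturbed features must rely on style-invariant content.
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    normalized = (feat - mu) / sigma
    perm = torch.randperm(feat.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample((feat.size(0), 1, 1, 1))
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
    return normalized * sigma_mix + mu_mix
```

In a full training loop, such an estimator would typically be maximized with respect to its own parameters while the encoder is trained to minimize the resulting bound, an adversarial scheme common to MI-based disentanglement methods.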
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Rémi_Flamary1
Submission Number: 4521