Domain-invariant Feature Exploration for Domain Generalization

Published: 04 Aug 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Deep learning has achieved great success in the past few years. However, its performance is likely to degrade in the face of non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from two sources: internal and mutual. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties of a domain that are agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains and contain common information, i.e., features that are transferable w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment to obtain the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
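To make the abstract's three ingredients concrete, here is a minimal PyTorch sketch of (i) Fourier-phase extraction, (ii) a CORAL-style correlation alignment loss, and (iii) a diversity-style exploration loss. The function names and the exact loss forms are illustrative assumptions, not the paper's implementation; the official code is in the DeepDG repository linked below.

```python
import torch
import torch.nn.functional as F

def fourier_phase(x):
    """Phase of the 2D Fourier transform of an image batch (B, C, H, W).

    The phase spectrum is often treated as a style-agnostic carrier of
    semantics, which is how the abstract motivates internally-invariant
    features (used here as a distillation target).
    """
    return torch.angle(torch.fft.fft2(x, dim=(-2, -1)))

def covariance(f):
    # Unbiased feature covariance of a batch of features (n, d).
    f = f - f.mean(dim=0, keepdim=True)
    return f.T @ f / (f.size(0) - 1)

def coral_loss(f_src, f_tgt):
    """CORAL-style correlation alignment between two domains.

    Matches second-order statistics (feature covariances), one common
    realization of "cross-domain correlation alignment" for the
    mutually-invariant features.
    """
    d = f_src.size(1)
    return ((covariance(f_src) - covariance(f_tgt)) ** 2).sum() / (4 * d * d)

def exploration_loss(f_int, f_mut):
    """Illustrative diversity loss (an assumption, not the paper's exact
    form): penalize similarity between the internally- and
    mutually-invariant feature halves so they encode complementary
    information.
    """
    f_int = F.normalize(f_int, dim=1)
    f_mut = F.normalize(f_mut, dim=1)
    return (f_int * f_mut).sum(dim=1).abs().mean()

# Toy usage with random tensors standing in for real encoder outputs.
if __name__ == "__main__":
    imgs = torch.randn(8, 3, 32, 32)
    phase = fourier_phase(imgs)                        # distillation target
    f_a, f_b = torch.randn(8, 64), torch.randn(8, 64)  # two source domains
    print(coral_loss(f_a, f_b).item(), exploration_loss(f_a, f_b).item())
```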
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Dear ACs and reviewers, thank you for your constructive feedback! We have now uploaded the camera-ready revision of the paper according to your comments, and we will release our code publicly.
Code: https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG
Assigned Action Editor: ~Mingming_Gong1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 108