D2IFLN: Disentangled Domain-Invariant Feature Learning Networks for Domain Generalization

Published: 01 Jan 2023, Last Modified: 04 Nov 2024, IEEE Trans. Cogn. Dev. Syst. 2023, CC BY-SA 4.0
Abstract: Domain generalization (DG) aims to learn a model that generalizes well to an unseen test distribution. Mainstream methods pursue this goal through domain-invariant representation learning. However, lacking a priori knowledge of which features are domain-specific and task-irrelevant and which are domain-invariant and task-relevant, existing methods typically learn entangled representations, limiting their capacity to generalize to the distribution-shifted target domain. To address this issue, in this article, we propose novel disentangled domain-invariant feature learning networks (D2IFLN) to realize feature disentanglement and facilitate domain-invariant feature learning. Specifically, we introduce a semantic disentanglement network and a domain disentanglement network, which disentangle the learned domain-invariant features from both domain-specific class-irrelevant features and domain-discriminative features. To avoid semantic confusion during adversarial domain-invariant feature learning, we further introduce a graph neural network to aggregate semantic features across domains during model training. Extensive experiments on three DG benchmarks show that the proposed D2IFLN outperforms the state of the art.
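The abstract describes splitting a representation into a domain-invariant part and domain-specific parts. As a minimal sketch of one common disentanglement proxy (a cross-covariance orthogonality penalty between the two feature groups, not the authors' exact D2IFLN losses, which involve adversarial networks and a GNN), the idea can be illustrated as follows; all names here are illustrative:

```python
import numpy as np

def orthogonality_loss(z_inv, z_spec):
    """Penalize statistical dependence between domain-invariant features
    z_inv and domain-specific features z_spec (shape: [batch, dim]).

    Centers each feature group, then sums the squared entries of their
    cross-covariance matrix; a value near zero means the two groups are
    (linearly) decorrelated, i.e., disentangled in this weak sense.
    """
    z_inv = z_inv - z_inv.mean(axis=0, keepdims=True)
    z_spec = z_spec - z_spec.mean(axis=0, keepdims=True)
    cross_cov = z_inv.T @ z_spec / len(z_inv)  # [dim_inv, dim_spec]
    return float((cross_cov ** 2).sum())

# Illustrative usage: identical feature groups are maximally entangled,
# so their penalty exceeds that of independent random feature groups.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 8))
b = rng.normal(size=(64, 8))
entangled = orthogonality_loss(a, a)
independent = orthogonality_loss(a, b)
```

In a full DG pipeline this penalty would be added to the task loss, alongside adversarial terms that make the invariant features indistinguishable across source domains.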