UDANG: Unsupervised Domain Adaptation with Neural Gating for learning invariant representations of subspaces

ICLR 2026 Conference Submission 17983 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: UDA, unsupervised domain adaptation, representation learning, image classification
Abstract: A key assumption of deep learning is that the data a model will be tested on (target domain) are drawn from the same distribution as the data it was trained on (source domain). Violating this assumption can cause a significant drop in performance, even when the source and target domains share similar underlying features. Unsupervised Domain Adaptation (UDA) uses unlabeled samples from the target domain, in addition to labeled samples from the source domain, to train a model that performs well on the target domain. Many existing UDA approaches rely on domain adversarial training (DAT) to reduce domain shift. Although effective, they do not explicitly disentangle the learned features into task-specific and domain-specific components; as a result, features that appear domain invariant may still carry domain-specific biases. To address this, we propose a novel method, UDA with Neural Gating (UDANG), that uses a dual adversarial objective to learn an adaptive gating mechanism that dynamically routes each feature dimension to either the domain or the task subspace. This strategy enables networks to effectively disentangle task-specific features from domain-specific ones. We validate our approach on multiple datasets and network architectures for image classification, demonstrating strong adaptation performance while retaining domain-discriminative features.
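To make the routing mechanism described in the abstract concrete, here is a minimal PyTorch sketch of one plausible reading: per-dimension sigmoid gates split a backbone feature vector into task and domain subspaces, with a gradient-reversal branch realizing the adversarial half of the dual objective. All names (`GatedUDAHead`, `GradientReversal`), dimensions, and the `lam` coefficient are hypothetical illustrations, not the submission's actual architecture.

```python
# A minimal sketch of the gating idea, assuming per-dimension sigmoid gates
# and a gradient-reversal layer; module names and sizes are hypothetical.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class GatedUDAHead(nn.Module):
    def __init__(self, feat_dim=512, num_classes=10, num_domains=2):
        super().__init__()
        # Gating network: one sigmoid gate per feature dimension.
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.task_clf = nn.Linear(feat_dim, num_classes)
        # Two domain classifiers realize a dual adversarial objective:
        # one pushes domain information OUT of the task subspace (via
        # gradient reversal), the other keeps it IN the domain subspace.
        self.adv_domain_clf = nn.Linear(feat_dim, num_domains)
        self.domain_clf = nn.Linear(feat_dim, num_domains)

    def forward(self, feats, lam=1.0):
        g = self.gate(feats)              # per-dimension gates in (0, 1)
        task_feats = g * feats            # routed to the task subspace
        domain_feats = (1.0 - g) * feats  # routed to the domain subspace
        class_logits = self.task_clf(task_feats)
        # Adversarial branch: make task features domain-indistinguishable.
        adv_logits = self.adv_domain_clf(GradientReversal.apply(task_feats, lam))
        # Cooperative branch: keep domain features domain-discriminative.
        dom_logits = self.domain_clf(domain_feats)
        return class_logits, adv_logits, dom_logits


# Usage sketch: backbone features of batch size 8, feature dimension 512.
head = GatedUDAHead()
feats = torch.randn(8, 512)
class_logits, adv_logits, dom_logits = head(feats)
```

Under this reading, pairing an adversarial domain classifier on the gated task subspace with a cooperative one on the complementary domain subspace is what would let the model adapt across domains while still retaining the features needed to discern the domain, as the abstract claims.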
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 17983