UES: An Ultra-expanded Semantic Space for Unsupervised Domain Adaptation

04 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Unsupervised Domain Adaptation, Transfer Learning, Margin loss
TL;DR: In UDA, the farther a feature is from the decision boundary, the better.
Abstract: Unsupervised Domain Adaptation (UDA) offers a promising way to reduce label annotation costs and mitigate dataset bias by transferring knowledge from a label-rich source domain to a related but unlabeled target domain. While the FC + Softmax + Cross-Entropy loss has become the de facto standard for classification under the IID assumption, its performance degrades significantly in UDA's non-IID setting, where target-domain features frequently violate decision boundaries, resulting in inter-class confusion. To overcome this limitation, we propose a Distance Margin-based Ultra-Expanded Space (UES) loss, which encourages features to occupy an expanded representation space and thereby maintain a safer distance from decision boundaries. Designed as a plug-and-play regularization term, UES can be seamlessly integrated into various classification-based UDA frameworks; it requires only a few lines of code and minimal hyperparameter tuning while adding little computational overhead. Extensive experiments demonstrate that our method achieves performance improvements on nearly all tested cross-domain tasks.
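The abstract describes UES as a few-line regularizer that pushes features away from decision boundaries. As a purely illustrative sketch (not the paper's actual UES formulation, which is not given here), a generic distance-margin penalty of this flavor can be written in numpy: it asks the true-class logit to exceed the hardest competing logit by at least a margin `m`, penalizing samples that sit too close to a boundary. The function name, margin value, and hinge form are all assumptions for illustration.

```python
import numpy as np

def margin_regularizer(logits, labels, m=1.0):
    """Hinge-style distance-margin penalty (illustrative only, not the
    paper's exact UES loss): encourages the true-class logit to exceed
    every other logit by at least m, keeping features away from the
    decision boundary."""
    n = logits.shape[0]
    # Logit of the ground-truth class for each sample.
    true_logit = logits[np.arange(n), labels]
    # Mask out the true class, then take the hardest competitor.
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf
    hardest = masked.max(axis=1)
    # Penalize samples whose margin (true - hardest) is below m.
    return np.maximum(0.0, m - (true_logit - hardest)).mean()
```

In a UDA framework this term would be weighted and added to the usual cross-entropy objective, e.g. `total = ce_loss + lam * margin_regularizer(logits, labels)`, with `lam` a hypothetical trade-off hyperparameter.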
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 1984