Keywords: Domain Generalization, Domain Adaptation, Representation Learning
TL;DR: The paper proposes two coupled loss terms for the unsupervised domain adaptation problem in computer vision
Abstract: Deep neural networks trained on labeled source-domain samples often suffer significant performance drops when applied to target domains with different data distributions. Some unsupervised domain adaptation (UDA) methods address this by explicitly aligning the source and target feature distributions; however, enforcing full alignment without target labels can misalign class semantics. We propose Source Knowledge Anchored Regularization (SKAR), a unified end-to-end UDA framework that transfers discriminative source knowledge via a composite loss on the network outputs, without explicitly enforcing distributional alignment. Our loss comprises: (1) an adaptation loss that minimizes the entropy of target predictions, boosting model confidence by leveraging source-domain knowledge; (2) a regularization loss that penalizes the model when its predictions fall into only a few classes, thereby preventing class collapse; (3) a self-supervised loss that enforces agreement between two strong augmentations of each target sample; and (4) a fidelity loss that anchors learning to the source labels while mitigating overfitting. A curriculum learning schedule gradually shifts the optimization focus from source fidelity to the target-oriented objectives. Our main contribution is the coupling of the adaptation and regularization terms; we demonstrate theoretically (via gradient analysis) and empirically (via ablation and hyperparameter studies, and t-SNE visualizations) that these terms interact synergistically. On the Office-Home, Office-31, and VisDA benchmarks, SKAR achieves state-of-the-art performance while requiring no auxiliary networks.
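The sketch below illustrates one plausible reading of the composite loss described in the abstract, written in PyTorch. The function name `skar_loss`, the specific formulations of each term, and the curriculum weighting are assumptions for illustration only; the paper's exact definitions may differ.

```python
# Hypothetical sketch of SKAR's composite loss, assembled from the abstract's
# description. Term definitions and weightings are assumptions, not the
# authors' exact formulation.
import torch
import torch.nn.functional as F


def skar_loss(source_logits, source_labels,
              target_logits_aug1, target_logits_aug2,
              step, total_steps):
    eps = 1e-8
    p_t = F.softmax(target_logits_aug1, dim=1)

    # (1) Adaptation loss: entropy minimization on target predictions.
    adaptation = -(p_t * torch.log(p_t + eps)).sum(dim=1).mean()

    # (2) Regularization loss: discourage predictions collapsing onto a few
    #     classes by maximizing the entropy of the batch-mean prediction.
    p_mean = p_t.mean(dim=0)
    regularization = (p_mean * torch.log(p_mean + eps)).sum()

    # (3) Self-supervised loss: agreement between two strong augmentations
    #     of the same target sample.
    p_t2 = F.softmax(target_logits_aug2, dim=1)
    consistency = F.kl_div((p_t2 + eps).log(), p_t, reduction="batchmean")

    # (4) Fidelity loss: cross-entropy on labeled source samples.
    fidelity = F.cross_entropy(source_logits, source_labels)

    # Curriculum schedule: gradually shift focus from source fidelity
    # to the target-oriented terms as training progresses.
    ramp = min(1.0, step / max(1, total_steps))
    target_terms = adaptation + regularization + consistency
    return (1.0 - 0.5 * ramp) * fidelity + ramp * target_terms
```

A typical training step would compute source logits on a labeled source batch, target logits on two strongly augmented views of an unlabeled target batch, and backpropagate the returned scalar.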
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 12672