Pseudolabel Distillation with Adversarial Contrastive Learning for Semisupervised Domain Adaptation

Published: 01 Jan 2024 · Last Modified: 16 Apr 2025 · ICME 2024 · CC BY-SA 4.0
Abstract: Semisupervised domain adaptation (SSDA) uses a few labeled target samples to learn pseudolabels for the unlabeled target samples in classification tasks. However, the learned pseudolabels inevitably contain mistakes, and using these incorrect pseudolabels at each training step causes negative effects to accumulate, degrading model performance. To address these issues, in this paper we propose a novel approach, pseudolabel distillation with adversarial contrastive learning (PDACL), for SSDA. Specifically, we use an image alignment technique to learn uncertainty knowledge between global and local features, and knowledge distillation to extract additional useful knowledge, ensuring the quality of the pseudolabels. A margin cosine loss is introduced to align the different domains. We also design an image joint discriminator and apply multifilter techniques to preserve the accuracy of the pseudolabels. Extensive experiments on several benchmark datasets demonstrate that the proposed approach achieves strong performance in SSDA.
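The abstract does not spell out the margin cosine loss it uses for domain alignment; the following is a minimal sketch of the standard large-margin cosine formulation (a margin m subtracted from the true-class cosine similarity before a scaled softmax cross-entropy), with the function name, and the values of s and m, chosen here for illustration rather than taken from the paper.

```python
import numpy as np

def margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """Sketch of a large-margin cosine loss.

    features: (N, D) sample embeddings; weights: (C, D) class vectors;
    labels: (N,) integer class ids. s (scale) and m (margin) are
    illustrative defaults, not values from the paper.
    """
    # L2-normalize both sides so the logits are pure cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T  # (N, C) cosine similarities

    # Subtract the margin m only from each sample's true-class cosine.
    margins = np.zeros_like(cos)
    margins[np.arange(len(labels)), labels] = m
    logits = s * (cos - margins)

    # Numerically stable softmax cross-entropy over the margined logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted only from the true class, the loss with m > 0 is strictly larger than the plain cosine cross-entropy on the same inputs, which is what forces a tighter angular separation between classes.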