Unsupervised Class-Imbalanced Domain Adaptation With Pairwise Adversarial Training and Semantic Alignment

Published: 01 Jan 2024, Last Modified: 17 May 2025. IEEE Trans. Circuits Syst. Video Technol. 2024. License: CC BY-SA 4.0
Abstract: Unsupervised domain adaptation (UDA) has become an appealing approach for transferring knowledge from a labeled source domain to an unlabeled target domain. However, when the classes in the source and target domains are imbalanced, most existing UDA methods suffer a significant performance drop, because the decision boundary tends to favor the majority classes. Recent class-imbalanced domain adaptation (CDA) methods address the biased label distribution by exploiting pseudo-labeled target samples during training, but they suffer from unreliable pseudo labels and error accumulation. In this paper, we propose a pairwise adversarial training approach for class-imbalanced domain adaptation. Unlike conventional adversarial training, in which adversarial samples are drawn from the $\ell _{p}$ ball around the original samples, we generate adversarial samples along the interpolated line between aligned pairs of source and target samples. Pairwise adversarial training (PAT) is a data-augmentation method that can be integrated into existing UDA models to tackle the CDA problem. Inspired by noise injection, we further extend PAT to noisy pairwise adversarial training (nPAT), in which random noise is injected during the generation of adversarial samples. We evaluate the proposed methods and the baselines on three major benchmark datasets: Office-Home, DomainNet, and Office-31. For Office-Home and Office-31, we sample the data according to the Reversely-unbalanced Source and Unbalanced Target (RS-UT) protocol to obtain imbalanced class distributions. Extensive experiments show that UDA models integrated with the proposed nPAT achieve notable improvements on most tasks over the baselines and state-of-the-art CDA methods. nPAT attains average accuracies of 66.56% on Office-Home and 80.22% on DomainNet, higher than those of the second-best methods. Experiments also show that our method is robust to unreliable pseudo labels.
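To make the interpolation-based idea concrete, the following PyTorch sketch illustrates how adversarial samples could be searched along the line segment between an aligned source/target pair, with optional noise injection for the nPAT variant. This is a minimal sketch under assumptions: the helper name, the single interpolation coefficient per pair, and the cross-entropy loss on source labels are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pairwise_adversarial_samples(model, x_src, x_tgt, y_src,
                                 steps=1, step_size=0.1, noise_std=0.0):
    """Hypothetical sketch of pairwise adversarial training (PAT / nPAT).

    Instead of perturbing x_src inside an l_p ball, the adversarial sample is
    searched along the interpolated line between an aligned source/target pair:
        x(lambda) = (1 - lambda) * x_src + lambda * x_tgt,  lambda in [0, 1].
    With noise_std > 0 this becomes the noisy variant (nPAT): random noise is
    injected into the interpolated sample before the adversarial step.
    How pairs are aligned (e.g., via pseudo labels) and the exact objective
    are assumptions for illustration.
    """
    # One interpolation coefficient per pair, broadcastable over sample dims.
    lam = torch.full((x_src.size(0),) + (1,) * (x_src.dim() - 1), 0.5,
                     device=x_src.device, requires_grad=True)
    for _ in range(steps):
        x_mix = (1.0 - lam) * x_src + lam * x_tgt
        if noise_std > 0:                      # nPAT: noise injection
            x_mix = x_mix + noise_std * torch.randn_like(x_mix)
        loss = F.cross_entropy(model(x_mix), y_src)
        grad, = torch.autograd.grad(loss, lam)
        # Move lambda in the direction that increases the loss (adversarial),
        # then clamp so the sample stays on the interpolated line segment.
        lam = (lam + step_size * grad.sign()).clamp(0.0, 1.0)
        lam = lam.detach().requires_grad_(True)
    return ((1.0 - lam) * x_src + lam * x_tgt).detach()
```

The returned samples would then be used as additional training inputs for an existing UDA model, which is how PAT/nPAT is described as a plug-in data-augmentation step.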