Adversarial Mixup Synthesis Training for Unsupervised Domain Adaptation

Published: 01 Jan 2020, Last Modified: 13 May 2023. Venue: ICASSP 2020.
Abstract: Domain adversarial training is a popular approach for Unsupervised Domain Adaptation (DA). However, the transferability of the adversarial training framework may drop greatly on adaptation tasks with a large distribution divergence between the source and target domains. In this paper, we propose a new approach termed Adversarial Mixup Synthesis Training (AMST) to alleviate this issue. AMST augments training with synthesized samples obtained by linearly interpolating between pairs of hidden representations and their domain labels. In this way, AMST encourages the model to make consistent but less confident domain predictions at interpolation points, which leads to domain-specific representations with fewer directions of variance. Building on previous work, we conduct a theoretical analysis of this phenomenon under ideal conditions and show that AMST can improve generalization ability. Finally, experiments on benchmark datasets demonstrate the effectiveness and practicability of AMST. We will publicly release our code on GitHub soon.
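The core augmentation the abstract describes, interpolating pairs of hidden representations and their domain labels, follows the standard mixup recipe. The sketch below is an illustrative assumption, not the authors' released implementation: the function name `mixup_hidden`, the Beta-distributed mixing coefficient, and the soft domain labels are conventions borrowed from the original mixup formulation.

```python
import numpy as np

def mixup_hidden(h_a, h_b, d_a, d_b, alpha=0.2, rng=None):
    """Linearly interpolate a pair of hidden representations (h_a, h_b)
    and their scalar domain labels (d_a, d_b), mixup-style.

    NOTE: a minimal sketch, assuming the mixing coefficient lam is drawn
    from Beta(alpha, alpha) as in standard mixup; the paper's exact
    sampling scheme may differ.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    h_mix = lam * h_a + (1.0 - lam) * h_b  # interpolated hidden representation
    d_mix = lam * d_a + (1.0 - lam) * d_b  # soft (interpolated) domain label
    return h_mix, d_mix, lam
```

The domain discriminator would then be trained to predict the soft label `d_mix` on the interpolated point, which is what discourages overconfident domain predictions along the interpolation path.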