Abstract: Most efforts in unsupervised domain adaptation (UDA) focus on learning domain-invariant representations between the two domains. However, such representations may still confuse patterns from the two domains because of the domain gap. Since semantic information is what the final task needs, while domain information merely reflects the discrepancy between the two domains, we propose to decouple semantic features from domain features to reduce domain bias. Unlike traditional methods, we adopt a simple but effective module with only one domain discriminator to decouple the representations, which offers two benefits. First, it eliminates the need for labeled sample pairs, making it better suited to UDA. Second, without adversarial learning, our model achieves a more stable training phase. Moreover, to further enhance the task-specific features, we employ a causal mechanism to separate the semantic features related to causal factors from the overall feature representations. Specifically, we adopt a dual-classifier strategy in which one classifier is fed the entire features and the other the semantic features. By minimizing the discrepancy between the outputs of the two classifiers, the causal influence of the semantic features is accentuated. Experiments on several public datasets demonstrate that the proposed model outperforms state-of-the-art methods. Our code is available at: https://github.com/qzxRtY37/DRLC.
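The dual-classifier discrepancy described in the abstract can be illustrated with a minimal sketch. All names, dimensions, and the choice of L1 distance between softmax outputs are illustrative assumptions, not details confirmed by the paper; the authors' actual implementation is in the linked repository.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discrepancy_loss(full_feats, sem_feats, W_full, W_sem):
    """Mean L1 distance between the class probabilities of two linear
    classifiers: one fed the entire features, one fed only the semantic
    features (a hypothetical stand-in for the paper's dual classifiers).

    full_feats: (N, D) entire feature representations
    sem_feats:  (N, D) semantic features decoupled from domain features
    W_full, W_sem: (D, C) weights of the two classifiers
    """
    p_full = softmax(full_feats @ W_full)
    p_sem = softmax(sem_feats @ W_sem)
    return np.abs(p_full - p_sem).mean()

# Toy data: the "semantic" part is a scaled copy of the full features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
sem = feats * 0.9
W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=(16, 4))
loss = discrepancy_loss(feats, sem, W1, W2)
```

Minimizing this loss over the feature extractor pushes the semantic features to carry the same task-relevant (causal) information as the full representation, which is the effect the abstract attributes to the dual-classifier strategy.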