Abstract: Unsupervised domain adaptation (UDA) improves model performance using only annotations from the source domain and unlabeled data from the target domain. Existing state-of-the-art UDA models learn domain-invariant representations across domains but evaluate primarily on class-balanced data. In this work, we propose an unsupervised domain adaptation approach via reinforcement learning that jointly leverages label prediction, domain discrimination, and label imbalance across domains. We experiment on the text classification task for its easily accessible datasets and compare the proposed method with five baselines. Experiments on three datasets show that our proposed method effectively learns robust domain-invariant representations and successfully adapts text classifiers across domains and imbalanced classes.
Paper Type: long