Learning Invariant Representation with Consistency and Diversity for Semi-Supervised Source Hypothesis Transfer
Abstract: Semi-supervised domain adaptation (SSDA) has shown promising results by leveraging unlabeled data and limited labeled samples in the target domain. However, access to source data is often restricted by data privacy concerns, giving rise to Semi-supervised Source Hypothesis Transfer (SSHT). Directly applying SSDA methods to SSHT is straightforward but poses two significant challenges: i) the hypothesis (classifier) is no longer supervised by source labels, and relying on only a few target labels may cause the hypothesis to collapse; ii) trained source models often exhibit bias, making them prone to misclassifying samples from minority categories into majority ones. We evaluate recent SSDA methods in the SSHT setting and observe notable variations in their performance compared with SSDA. To address these challenges, we first mitigate model overfitting to the labeled target data by promoting prediction consistency between two types of randomly augmented unlabeled data, thereby preventing training collapse. In addition, we maintain both prediction diversity and discriminability by leveraging the unlabeled data. Experiments on SSHT tasks show that our method yields more stable and competitive results than state-of-the-art methods.
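The two regularizers described in the abstract, consistency between differently augmented views of unlabeled data and a diversity/discriminability term on unlabeled predictions, can be illustrated with a minimal PyTorch-style sketch. The confidence threshold, the information-maximization form of the diversity term, and all function names below are illustrative assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_weak, logits_strong, threshold=0.95):
    """Consistency between two random augmentations of the same unlabeled batch.

    Pseudo-labels from the weakly augmented view supervise the strongly
    augmented view; low-confidence samples are masked out. The threshold
    value is an assumption, not taken from the paper.
    """
    probs_weak = F.softmax(logits_weak.detach(), dim=1)
    conf, pseudo_labels = probs_weak.max(dim=1)
    mask = (conf >= threshold).float()
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (mask * per_sample).mean()

def diversity_discriminability_loss(logits_unlabeled):
    """Information-maximization-style objective on unlabeled predictions.

    Minimizes per-sample entropy (discriminability) while maximizing the
    entropy of the batch-mean prediction (diversity), which discourages a
    biased source hypothesis from collapsing onto majority classes.
    """
    probs = F.softmax(logits_unlabeled, dim=1)
    ent_per_sample = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)
    ent_marginal = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    return ent_per_sample - ent_marginal
```

In this sketch the two losses would be summed (possibly with trade-off weights) with the cross-entropy loss on the few labeled target samples; the weighting scheme is likewise an assumption.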