Self-Training: A Survey

TMLR Paper 2045 Authors

12 Jan 2024 (modified: 27 Mar 2024). Rejected by TMLR.
Abstract: Semi-supervised algorithms aim to learn prediction functions from a small set of labeled training examples and a large set of unlabeled observations. Because these approaches are relevant in many applications, they have received considerable interest in both academia and industry. Among the existing techniques, self-training methods have attracted growing attention in recent years. These models are designed to find the decision boundary in low-density regions without making additional assumptions about the data distribution, and they use the unsigned output score of a learned classifier, or its margin, as an indicator of confidence. The working principle of self-training algorithms is to learn a classifier iteratively by assigning pseudo-labels to the unlabeled training samples whose margin exceeds a certain threshold. The pseudo-labeled examples are then added to the labeled training data, and a new classifier is trained on the enlarged set. In this paper, we present self-training methods for binary and multi-class classification, as well as their variants and two related approaches, namely consistency-based approaches and transductive learning. We also provide brief descriptions of self-supervised learning and reinforced self-training, two distinct approaches despite their similar names. Finally, we present the most popular applications where self-training is employed. For pseudo-labeling, fixed thresholds usually lead to subpar results, highlighting the importance of dynamic thresholding for best performance. Moreover, reducing pseudo-label noise improves generalization and class separation. Performance is also affected by the size of the initial labeled training set. These findings highlight the complex interplay between threshold selection, noise control, and labeled set size in self-training efficacy. They emphasize the need for careful parameter tuning and data preprocessing to fully exploit the potential of semi-supervised learning, and they pave the way for future research in refining methodologies and expanding applicability across domains. To the best of our knowledge, this is the first thorough and complete survey on self-training.
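To make the working principle described in the abstract concrete, below is a minimal Python sketch of threshold-based self-training. It is an illustration only, assuming a scikit-learn base learner; the function name self_train, the threshold parameter tau, and the iteration cap max_rounds are our own labels, not the survey's notation.

    # Minimal self-training sketch: pseudo-label unlabeled points whose
    # top-class confidence exceeds a fixed threshold, enlarge the labeled
    # set, and retrain. Illustrative assumptions: LogisticRegression as the
    # base classifier, tau and max_rounds as hypothetical hyperparameters.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_l, y_l, X_u, tau=0.9, max_rounds=10):
        X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
        for _ in range(max_rounds):
            clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
            if len(X_u) == 0:
                break
            proba = clf.predict_proba(X_u)
            margin = proba.max(axis=1)        # confidence of the top class
            confident = margin >= tau          # threshold-based selection
            if not confident.any():
                break                          # no sample passes the threshold
            pseudo = clf.predict(X_u[confident])
            X_l = np.vstack([X_l, X_u[confident]])   # enlarge the labeled set
            y_l = np.concatenate([y_l, pseudo])
            X_u = X_u[~confident]              # drop pseudo-labeled points
        return clf

A dynamic-thresholding variant, which the abstract argues typically outperforms a fixed threshold, would replace the constant tau with a schedule that adapts across iterations or per class.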
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We express our gratitude to the reviewers for their insightful comments, which have greatly contributed to enhancing the content and readability of the paper. We have carefully considered all of their concerns and addressed them in our responses and in the revised version of the paper.
Assigned Action Editor: ~Gang_Niu1
Submission Number: 2045