SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

28 May 2022, 11:36 (modified: 20 Aug 2022, 07:21) · Accepted by TMLR
Abstract: Deep learning with noisy labels is a challenging task that has received much attention from the machine learning and computer vision communities. Recent prominent methods that build on a specific sample selection (SS) strategy and a specific semi-supervised learning (SSL) model have achieved state-of-the-art performance. Intuitively, better performance could be achieved if stronger SS strategies and SSL models were employed. Following this intuition, one might easily derive various effective noisy-label learning methods using different combinations of SS strategies and SSL models, which is, however, simply reinventing the wheel. To prevent this problem, we propose SemiNLL, a versatile framework that investigates how to naturally combine different SS and SSL components based on their effectiveness and efficiency. We conduct a systematic and detailed analysis of the combinations of possible components based on our framework. Our framework can absorb various SS strategies and SSL backbones, utilizing their power to achieve promising performance. The instantiations of our framework demonstrate substantial improvements over state-of-the-art methods on benchmark-simulated and real-world datasets with noisy labels.
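The core idea described in the abstract, selecting likely-clean samples via an SS strategy and handing the resulting labeled/unlabeled split to an SSL backbone, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function names, the small-loss selection criterion, and the placeholder SSL step are all our assumptions.

```python
# Illustrative sketch of a generic SS + SSL training loop, assuming a
# small-loss sample selection strategy and a pluggable SSL step.
# All names here (select_clean, ssl_step) are hypothetical placeholders.
import numpy as np

def select_clean(losses, ratio=0.5):
    """Mini-batch-wise sample selection: treat the `ratio` fraction of
    samples with the smallest per-sample loss as (likely) clean."""
    k = max(1, int(len(losses) * ratio))
    clean_idx = np.argsort(losses)[:k]
    mask = np.zeros(len(losses), dtype=bool)
    mask[clean_idx] = True
    return mask

def ssl_step(x_labeled, y_labeled, x_unlabeled):
    """Placeholder SSL update: any backbone (e.g., a pseudo-labeling or
    MixMatch-style model) could be plugged in here. A real implementation
    would update model parameters; we only report the split sizes."""
    return len(x_labeled), len(x_unlabeled)

# Toy mini-batch: 6 samples with synthetic per-sample losses.
x = np.arange(6)
y = np.array([0, 1, 0, 1, 0, 1])
losses = np.array([0.1, 2.0, 0.2, 1.5, 0.05, 3.0])

mask = select_clean(losses, ratio=0.5)          # selects samples 0, 2, 4
n_lab, n_unlab = ssl_step(x[mask], y[mask], x[~mask])
print(n_lab, n_unlab)  # prints: 3 3
```

Any concrete instantiation of the framework would replace `select_clean` with an actual SS strategy (e.g., a GMM fit on per-sample losses) and `ssl_step` with a full SSL training update.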
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:

# Revision 1:

We appreciate all reviewers for their insightful and constructive comments! We carefully addressed all the raised concerns and modified the paper accordingly. The modified parts are highlighted in blue in the latest version; here is a summary of the changes for your convenience:

## Main Paper:

- We revised the abstract by clarifying "natural combination" and "new state-of-the-art".
- We revised the introduction to define noisy labels clearly.
- We added a **detailed analysis of the advantages** of mini-batch-wise SS over epoch-wise SS in Section 3.1.
- We modified Section 5.2 to precisely describe the performance of DivideMix+ and GPL compared to other methods.
- **[New experiments]** We added an **ablation study of training with two networks** in Section 6.5.
- We added a detailed discussion of **broader impact concerns** in Section 6.6.

## Appendix:

- We added a summary of the contents of the Appendix.
- We added a description of the label transition matrix in Appendix C.
- We added a description of GMM and SPD in Appendix D.
- **[New experiments]** We added results on more real-world datasets in Appendix E.1.
- **[New experiments]** We added a **sensitivity analysis of the batch size** of mini-batch SS in Appendix E.2.
- **[New experiments]** We added results on more model architectures in Appendix E.3.
- **[New experiments]** We compared GPL (mini-batch) and GPL (epoch) in Appendix E.4.
- We added a detailed discussion of the **limitations of SS and SSL components** in Appendix F.
- **[New experiments]** We conducted a **thorough comparison between different components** instantiated by our framework and gave **recommendations** for how to **naturally** choose and combine them in Appendix G.

# Revision 2:

We would once again like to thank the AE and all the reviewers for their valuable feedback and suggestions for improving the manuscript. For the camera-ready version, we made the following changes as instructed:

- We proofread the entire paper and fixed grammatical errors and typos;
- We placed the description of sample selection methods in the main paper;
- We made all notation consistent throughout the entire paper.
Assigned Action Editor: ~Wei_Liu3