Relative Instance Credibility Inference for Learning with Noisy Labels

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Abstract: Noisy labels typically degrade the generalization and robustness of neural networks trained under supervision. In this paper, we propose a simple, theoretically guaranteed sample selection framework that handles noisy labels as a plug-in module. Specifically, we re-purpose a sparse linear model with incidental parameters as a unified Relative Instance Credibility Inference (RICI) framework, which detects and removes outliers in the forward pass of each mini-batch and trains the network on the remaining instances. The credibility of an instance is measured by the sparsity of its incidental parameters; ranking instances by this credibility within each mini-batch yields a relatively consistent training mini-batch. The proposed RICI framework yields two variants that achieve superior performance under symmetric and asymmetric noise settings, respectively. We prove theoretically that RICI can recover the clean data. Experimental results on several benchmark datasets and a real-world noisy dataset demonstrate the effectiveness of our framework.
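The abstract does not specify the exact optimization procedure, but the core idea of ranking instances by the sparsity of incidental parameters can be sketched as follows. This is a rough illustration, not the paper's implementation: the function names (`soft_threshold`, `rici_select`), the soft-thresholding step, and the fixed keep ratio are all assumptions used for demonstration.

```python
import numpy as np

def soft_threshold(x, lam):
    # Soft-thresholding operator: induces sparsity by zeroing
    # out components with magnitude below lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rici_select(residuals, keep_ratio=0.7, lam=0.5):
    """Illustrative mini-batch selection: estimate per-instance
    incidental parameters as sparse residual components, then keep
    the instances whose incidental parameters have the smallest
    magnitude (i.e., the most credible instances)."""
    gamma = soft_threshold(residuals, lam)   # incidental parameters (sparse)
    n_keep = max(1, int(len(residuals) * keep_ratio))
    # Rank by |gamma|: zero (sparse) entries are the most credible.
    keep_idx = np.argsort(np.abs(gamma), kind="stable")[:n_keep]
    return np.sort(keep_idx)

# Toy mini-batch: small residuals suggest clean labels,
# large residuals suggest mislabeled instances (indices 2, 4, 7).
res = np.array([0.2, -0.1, 3.5, 0.3, -4.0, 0.1, 0.05, 2.8, -0.2, 0.15])
idx = rici_select(res, keep_ratio=0.7, lam=0.5)  # → [0 1 3 5 6 8 9]
```

In this toy example, the three instances with large residuals receive nonzero incidental parameters and are dropped, while the rest of the mini-batch is retained for training.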
One-sentence Summary: A theoretically guaranteed plug-in sample selection framework for learning with noisy labels.