Robust Learning with Adaptive Sample Credibility Modeling

29 Sept 2021 (modified: 13 Feb 2023) | ICLR 2022 Conference Withdrawn Submission | Readers: Everyone
Keywords: robust learning, label noise, divide-and-conquer
Abstract: Training deep neural networks (DNNs) with noisy labels is practically challenging, since inaccurate labels severely degrade a DNN's generalization ability. Previous efforts tend to handle part or all of the data in a unified denoising flow to mitigate the noisy-label problem, but they overlook the intrinsic differences in difficulty among noisy samples. In this paper, a novel and adaptive end-to-end robust learning method, called CREMA, is proposed. The insight behind it is that the credibility of a training sample can be estimated from the joint distribution of its data-label pair, allowing clean and noisy samples to be roughly separated and then handled with different denoising procedures in a divide-and-conquer manner. For the clean set, we design a memory-based modulation scheme that dynamically adjusts the contribution of each sample according to its historical credibility sequence during training, thereby alleviating the effect of hard noisy samples that slip into the clean set. Meanwhile, for samples assigned to the noisy set, we correct their labels in a selective manner to maximize data utilization and further boost performance. Extensive experiments on mainstream benchmarks, including synthetic (noisy versions of MNIST, CIFAR-10, and CIFAR-100) and real-world (Clothing1M and Animal-10N) noisy datasets, demonstrate the superiority of the proposed method.
One-sentence Summary: We propose CREMA, an end-to-end robust training method that models sample credibility and processes noisy data in a divide-and-conquer manner.
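The clean/noisy split described in the abstract can be sketched with a common stand-in: fitting a two-component Gaussian mixture to per-sample training losses and treating the low-loss component as "clean". This is a generic small-loss heuristic, not CREMA's actual credibility estimator; the function name and thresholds below are illustrative assumptions.

```python
import numpy as np

def split_by_credibility(losses, n_iter=50):
    """Partition samples into likely-clean and likely-noisy sets.

    Fits a two-component 1-D Gaussian mixture to per-sample losses via EM
    and treats the low-mean component as "clean". A small-loss stand-in
    for illustration only, not the paper's credibility model.
    """
    losses = np.asarray(losses, dtype=float)
    mu = np.array([losses.min(), losses.max()])   # component means
    sigma = np.full(2, losses.std() + 1e-6)       # component std devs
    w = np.full(2, 0.5)                           # mixing weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample
        pdf = (w * np.exp(-0.5 * ((losses[:, None] - mu) / sigma) ** 2)
               / (sigma * np.sqrt(2.0 * np.pi)))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * losses[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (losses[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        w = nk / len(losses)
    clean = np.argmin(mu)          # low-loss component = likely clean
    return resp[:, clean] > 0.5    # boolean mask over the samples
```

In a divide-and-conquer pipeline, the `True` entries of the returned mask would go to the clean branch (e.g. credibility-weighted supervised loss) and the `False` entries to the noisy branch (e.g. selective label correction).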