Learning from Noisy Labels via Meta Credible Label Elicitation

Published: 01 Jan 2022, Last Modified: 05 Nov 2023, ICIP 2022
Abstract: Deep neural networks (DNNs) can easily overfit to noisy data, which leads to a significant degradation of performance. Previous efforts primarily rely on label correction or sample selection to alleviate the supervision problem. To distinguish noisy labels from clean labels, we propose a meta-learning framework that gradually elicits credible labels via meta-gradient descent steps under the guidance of potentially non-noisy samples. Specifically, by exploiting the topological information of the feature space, we automatically estimate label confidence with a meta-learner. An iterative procedure is designed to select the most trustworthy noisy-labeled instances to generate pseudo labels. We then train DNNs with both pseudo supervision and the original noisy supervision, learning sufficiency and robustness properties through a joint learning objective. Experimental results on benchmark classification datasets show the superiority of our approach against state-of-the-art methods.
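The abstract describes a joint objective that combines pseudo supervision from meta-elicited credible labels with the original noisy supervision. The PyTorch-style sketch below is only an illustration of what such a per-sample, confidence-weighted combination could look like; the weighting scheme, the `alpha` coefficient, and the `confidence` scores produced by the meta-learner are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, noisy_labels, pseudo_labels, confidence, alpha=0.5):
    """Hypothetical sketch of a joint objective mixing pseudo and noisy supervision.

    logits:        model outputs, shape (batch, num_classes)
    noisy_labels:  original (possibly corrupted) labels, shape (batch,)
    pseudo_labels: credible labels elicited for selected instances, shape (batch,)
    confidence:    per-sample label-confidence scores in [0, 1] from a meta-learner
    alpha:         assumed trade-off coefficient between the two terms
    """
    # Robustness term: cross-entropy against the original noisy labels.
    loss_noisy = F.cross_entropy(logits, noisy_labels, reduction="none")
    # Sufficiency term: cross-entropy against the elicited pseudo labels,
    # weighted by how confident the meta-learner is in each pseudo label.
    loss_pseudo = F.cross_entropy(logits, pseudo_labels, reduction="none")
    per_sample = alpha * confidence * loss_pseudo + (1.0 - alpha) * loss_noisy
    return per_sample.mean()

# Minimal usage example with random tensors (shapes only, no real data).
if __name__ == "__main__":
    logits = torch.randn(8, 10)
    noisy = torch.randint(0, 10, (8,))
    pseudo = torch.randint(0, 10, (8,))
    conf = torch.rand(8)
    print(joint_loss(logits, noisy, pseudo, conf).item())
```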