Deep Learning from Noisy Labels via Robust Nonnegative Matrix Factorization-Based Design

Published: 01 Jan 2023 · Last Modified: 05 Oct 2024 · CAMSAP 2023 · License: CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) rely heavily on labeled data for supervised training. However, acquiring accurate labels is often challenging. Moreover, DNNs easily overfit to noisy labels, which hinders their generalization ability. Modeling the label noise with a "confusion matrix" is a widely adopted strategy under such circumstances. A recent work addressed this problem using a regularizer reminiscent of minimum-volume enclosing simplex (MVES)-based matrix factorization. MVES is known for the identifiability of its latent factors, which in turn helps accurately estimate the confusion matrix and rectify its negative effects when training DNNs. However, MVES is highly sensitive to outliers due to its geometric nature. To overcome this limitation, we draw insight from robustified versions of MVES in the literature to develop an outlier-resilient noisy-label learning criterion. Consequently, when some data samples deviate from the model assumptions, the proposed criterion automatically downweights such outlying data, thereby steering the DNN toward identifying the correct model parameters. Our experimental results support the effectiveness of the proposed criterion.
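To make the idea concrete, the following is a minimal NumPy sketch of the general recipe the abstract describes: a learnable confusion matrix maps the network's clean-label posterior to a noisy-label posterior, a log-determinant term acts as an MVES-style minimum-volume regularizer, and high-loss samples are downweighted as presumed outliers. The quantile-based weighting scheme, the function name, and all parameter values here are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def robust_noisy_label_loss(logits, noisy_labels, T,
                            vol_weight=0.01, outlier_quantile=0.9):
    """Illustrative objective (assumed form, not the paper's exact one).

    logits       : (n, K) DNN outputs for the clean-label posterior.
    noisy_labels : (n,) observed (possibly corrupted) labels.
    T            : (K, K) confusion matrix; T[i, j] is the probability
                   that clean class i is observed as class j.
    """
    p_clean = softmax(logits)          # (n, K) clean-label posterior
    p_noisy = p_clean @ T              # (n, K) noisy-label posterior
    n = logits.shape[0]
    nll = -np.log(p_noisy[np.arange(n), noisy_labels] + 1e-12)

    # Outlier resilience (hypothetical scheme): zero out the weight of
    # the highest-loss samples, treating them as model-assumption violators.
    thresh = np.quantile(nll, outlier_quantile)
    w = np.where(nll <= thresh, 1.0, 0.0)
    data_term = (w * nll).sum() / max(w.sum(), 1.0)

    # Minimum-volume regularizer: shrink the simplex spanned by rows of T,
    # reminiscent of the MVES criterion cited in the abstract.
    vol = np.log(np.abs(np.linalg.det(T)) + 1e-12)
    return data_term + vol_weight * vol
```

In practice such a loss would be minimized jointly over the DNN weights and the (suitably constrained, row-stochastic) confusion matrix; the hard 0/1 weights shown here could be replaced by any smooth downweighting function.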