JoCaD: a joint training method by combining consistency and diversity

Published: 01 Jan 2024, Last Modified: 04 Nov 2024. Multimedia Tools and Applications, 2024. License: CC BY-SA 4.0.
Abstract: Noisy labels, caused by mistakes in manual annotation or data collection, pose a major challenge to the broader application of deep neural networks. Current robust learning methods such as Decoupling, Co-teaching, and Joint Training with Co-Regularization are promising for learning with noisy labels, yet they do not fully consider the coordination between consistency and diversity, which is crucial for model performance. To tackle this issue, this paper proposes a novel robust learning paradigm called Joint training by combining Consistency and Diversity (JoCaD). JoCaD is devoted to maximizing the prediction consistency of the two networks while keeping sufficient diversity in their representation learning. Specifically, to reconcile the relationship between consistency and diversity, an effective implementation is proposed that dynamically adjusts the joint loss to boost learning with noisy labels. Extensive experimental results on MNIST, CIFAR-10, CIFAR-100, and Clothing1M demonstrate that the proposed JoCaD outperforms representative state-of-the-art methods.
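To make the idea of a dynamically weighted joint loss more concrete, the sketch below shows one plausible way to combine a supervised term, a prediction-consistency term, and a representation-diversity penalty for two peer networks. This is only an illustration of the general idea stated in the abstract, not the authors' exact formulation: the symmetric-KL consistency term, the cosine-similarity diversity penalty, and the scheduled weight `lam` are all assumptions introduced here for exposition.

```python
# Minimal sketch (assumptions noted above) of a joint loss balancing
# prediction consistency and representation diversity between two networks.
import torch
import torch.nn.functional as F

def joint_loss(logits1, logits2, feat1, feat2, targets, lam):
    """Hypothetical joint loss for two peer networks.

    logits1, logits2: class logits from the two networks, shape (B, C)
    feat1, feat2:     penultimate-layer features, shape (B, D)
    targets:          (possibly noisy) labels, shape (B,)
    lam:              dynamic weight in [0, 1], e.g. scheduled per epoch
    """
    # Supervised term: standard cross-entropy for both networks.
    ce = F.cross_entropy(logits1, targets) + F.cross_entropy(logits2, targets)

    # Consistency term: symmetric KL divergence between the two predictions,
    # encouraging the networks to agree on their outputs.
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    consistency = (F.kl_div(p1.log(), p2, reduction="batchmean")
                   + F.kl_div(p2.log(), p1, reduction="batchmean"))

    # Diversity term: penalize overly similar feature representations so the
    # two networks do not collapse into identical views of the data.
    diversity_penalty = F.cosine_similarity(feat1, feat2, dim=1).abs().mean()

    # Dynamically weighted combination: adjusting lam over training shifts
    # emphasis between fitting labels, agreeing on predictions, and keeping
    # the learned representations diverse.
    return (1.0 - lam) * ce + lam * (consistency + diversity_penalty)
```

In such a scheme, `lam` would typically start small so the networks first fit the (noisy) labels and drift apart, then grow so that agreement and the diversity penalty increasingly shape training; how JoCaD actually schedules its weights is described in the paper itself.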