Reciprocal Label Diffusion for Learning with Noisy Labels

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Learning with Noisy Labels
Abstract: Deep neural networks are susceptible to overfitting noisy labels, resulting in poor generalization. We propose Reciprocal Label Diffusion (RLD), a novel framework that leverages a mutual guidance mechanism between a label diffusion model and a prediction model to learn effectively from noisy labels. In RLD, the diffusion model is guided by the outputs of the prediction model to denoise corrupted labels through a forward and reverse diffusion process in the logit space, modeling and correcting label noise with standard diffusion distributions while capturing instance-dependent noise. In turn, the prediction model is refined using the denoised labels produced by the diffusion model, enhancing its learning of accurate representations. This reciprocal interaction enables both models to improve each other iteratively. To further strengthen robustness to label noise, we incorporate a contrastive denoising loss that enforces consistency across different data augmentations. Experimental results on benchmark datasets demonstrate that our approach outperforms state-of-the-art methods, achieving significant improvements in classification accuracy under various noise conditions. Our framework provides a robust solution for learning with noisy labels by exploiting the reciprocal interplay between diffusion and prediction models.
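The reciprocal mechanism described above can be sketched in a toy form: label logits are corrupted by a forward Gaussian diffusion, then denoised by reverse steps that are nudged toward the prediction model's output. This is only an illustrative sketch, not the paper's implementation; the noise schedule (`alphas_bar`), the guidance weight, the naive `y0` estimator, and the fixed toy predictor are all assumptions made for demonstration.

```python
# Illustrative sketch of the reciprocal loop, assuming a simple linear
# noise schedule and a convex-combination guidance rule (not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(logits, t, alphas_bar):
    """Corrupt label logits with Gaussian noise at step t, i.e. q(y_t | y_0)."""
    a = alphas_bar[t]
    noise = rng.standard_normal(logits.shape)
    return np.sqrt(a) * logits + np.sqrt(1.0 - a) * noise

def reverse_step(y_t, pred_logits, t, alphas_bar, guidance=0.5):
    """One reverse step: a naive estimate of the clean logits, nudged toward
    the prediction model's output (standing in for RLD's mutual guidance)."""
    a = alphas_bar[t]
    y0_hat = y_t / np.sqrt(a)                      # crude estimate of y_0
    return (1.0 - guidance) * y0_hat + guidance * pred_logits

# Toy setup: 3 classes; the given (possibly corrupted) label is class 0,
# while the prediction model favors class 1.
T = 10
alphas_bar = np.linspace(0.99, 0.1, T)             # assumed noise schedule
y0 = np.array([5.0, 0.0, 0.0])                     # logits of the observed label
pred_logits = np.array([0.0, 4.0, 0.0])            # prediction model's output

y_t = forward_diffuse(y0, T - 1, alphas_bar)       # forward process
for t in reversed(range(T)):                       # reverse (denoising) process
    y_t = reverse_step(y_t, pred_logits, t, alphas_bar)

denoised_label = int(np.argmax(y_t))               # label used to refine the predictor
```

In a full training loop, `pred_logits` would come from the current prediction model for each instance, and the denoised labels would in turn supervise that model, closing the reciprocal cycle.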
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 12656