Enhancing Model Robustness Against Noisy Labels via Kronecker Product Decomposition

TMLR Paper 6518 Authors

15 Nov 2025 (modified: 03 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Deep learning models have made remarkable progress across various domains in recent years. These models rely heavily on large-scale datasets for training, and noise in those datasets can degrade model performance. Training accurate deep learning models therefore requires algorithms that are robust to noisy training data and outliers while maintaining high performance. In this work, we study model training under noisy labels/outputs and propose a method based on Kronecker product decomposition to improve robustness during training. The proposed method is easy to implement and can readily be combined with robust loss functions. We report experiments on both classification and regression tasks in the presence of noisy labels/outputs, and the results demonstrate that our approach outperforms existing robust loss methods in terms of model performance.
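As background for the decomposition the abstract names, the sketch below illustrates the generic Kronecker product factorization idea (this is only an illustration of the mathematical operation, not the paper's specific training method): a large structured matrix `W` is expressed as the Kronecker product of two much smaller factors `A` and `B`, so far fewer parameters are stored.

```python
import numpy as np

# Generic illustration of Kronecker product structure (not the paper's
# actual algorithm): W = A ⊗ B, where A is (m, n) and B is (p, q),
# yields a (m*p, n*q) matrix represented by only m*n + p*q parameters.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # small factor: 16 parameters
B = rng.standard_normal((8, 8))   # small factor: 64 parameters

W = np.kron(A, B)                 # structured (32, 32) matrix, 1024 entries

# Block structure: the (i, j)-th (8 x 8) block of W equals A[i, j] * B.
assert np.allclose(W[:8, :8], A[0, 0] * B)
print(W.shape, A.size + B.size)   # 80 parameters describe 1024 entries
```

The parameter saving shown here (80 values describing a 1024-entry matrix) is the usual motivation for Kronecker-structured models; how the paper exploits this structure for label-noise robustness is detailed in its main text.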
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Vidya_Muthukumar3
Submission Number: 6518