Keywords: computer vision, learning with noisy labels, robustness
Abstract: Recent studies indicate that deep neural networks suffer degraded generalization under noisy supervision. Existing methods focus on isolating clean subsets or correcting noisy labels, but face limitations such as high computational cost, heavy hyperparameter tuning, and coarse-grained optimization. To address these challenges, we propose a novel two-stage framework for learning with noisy labels (LNL) that enables instance-level optimization through a dynamically weighted loss function, avoiding hyperparameter tuning. To obtain stable and accurate noise-modeling information, we introduce a simple yet effective metric, termed $\textit{wrong event}$, which dynamically models the cleanliness and difficulty of individual samples while keeping computational costs low. Our framework first collects $\textit{wrong event}$ information while building a strong base model, then performs noise-robust training on that base model, using a probabilistic model to process each sample's $\textit{wrong event}$ information. Experiments on six synthetic and real-world LNL benchmarks demonstrate that our method surpasses state-of-the-art methods in performance and reduces storage and computation time by nearly 75\%, substantially improving model scalability. Our code is available at https://github.com/iTheresaApocalypse/IDO.
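To make the $\textit{wrong event}$ idea concrete, below is a minimal sketch, assuming the metric accumulates, for each training sample, the number of epochs in which the model's prediction disagrees with the given (possibly noisy) label. The `update_wrong_events` helper and the convention that the data loader yields sample indices are illustrative assumptions, not the authors' released implementation.

```python
import torch

@torch.no_grad()
def update_wrong_events(model, loader, wrong_events, device="cpu"):
    """Accumulate one wrong-event count per sample at the end of an epoch.

    wrong_events: 1-D LongTensor of length len(dataset), initialized to zeros.
    loader: assumed to yield (inputs, labels, indices) so that counts can be
            mapped back to individual samples.
    """
    model.eval()
    for inputs, labels, indices in loader:
        preds = model(inputs.to(device)).argmax(dim=1).cpu()
        # A "wrong event" occurs when the prediction disagrees with the label.
        wrong_events[indices] += (preds != labels).long()
    return wrong_events
```

Under this reading, samples whose counts stay high across training are more likely noisy or hard, and the abstract's second stage would fit a probabilistic model over these per-sample counts to weight the loss at the instance level.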
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 5937