Keywords: Dataset Distillation
Abstract: Robust training under noisy labels remains a critical challenge in deep learning due to the risk of confirmation bias and overfitting in iterative correction pipelines. In this work, we propose a novel trajectory-based dataset distillation framework that jointly addresses noise suppression and knowledge preservation without requiring label correction or clean subsets. Our method introduces two complementary components: Selective Guidance Reweighting (SGR) and Teacher-Inspired Auxiliary Targets (TIAT). SGR improves teacher signal quality by integrating global forgetting patterns (via second-split forgetting) with local feature consistency (via KNN-based evaluation), forming a hybrid reweighting mechanism that prioritizes clean supervision. TIAT further enhances the learning capacity of the distilled dataset by injecting auxiliary targets derived from intermediate teacher dynamics, ensuring internal consistency while reinforcing informative supervision signals. Together, these strategies enable the distilled dataset to retain cleaner and richer knowledge representations under noisy supervision. The proposed framework is label-preserving, computationally efficient, and broadly applicable. Extensive experiments on benchmark datasets demonstrate consistent performance improvements over state-of-the-art dataset distillation methods across symmetric, asymmetric, and real-world noise scenarios.
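To make the hybrid reweighting idea behind SGR concrete, the following is a minimal illustrative sketch (not the paper's implementation): it combines a global forgetting-based cue with a local KNN label-consistency cue into per-sample weights. The function name, the `forgetting_scores` input, and the `alpha`/`k` parameters are assumptions introduced here for illustration only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hybrid_sample_weights(features, labels, forgetting_scores, k=10, alpha=0.5):
    """Toy hybrid reweighting: blend a global forgetting-based score with a
    local KNN label-consistency score into per-sample weights.
    All names and parameters here are illustrative placeholders."""
    # Local cue: fraction of the k nearest neighbours (in feature space)
    # that share the sample's label.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)
    neighbor_labels = labels[idx[:, 1:]]          # drop the sample itself
    local = (neighbor_labels == labels[:, None]).mean(axis=1)

    # Global cue: samples forgotten often in a second-split forgetting run
    # are treated as likely noisy, so they receive lower weight.
    f = np.asarray(forgetting_scores, dtype=float)
    global_clean = 1.0 - (f - f.min()) / (f.max() - f.min() + 1e-12)

    # Convex combination of the two cues, normalised to sum to one,
    # so that cleaner samples dominate the teacher supervision.
    weights = alpha * global_clean + (1.0 - alpha) * local
    return weights / (weights.sum() + 1e-12)
```

The convex combination is only one plausible way to merge the two cues; the paper's actual reweighting mechanism may differ.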
Primary Area: optimization
Submission Number: 1208