Abstract: Highlights
• Addressing a critical challenge in machine learning: training robust models on datasets that simultaneously suffer from long-tailed class imbalance and label noise, a common yet underexplored scenario in real-world applications.
• Proposing RCKD, a framework integrating diverse multi-expert knowledge distillation and dual-mode contrastive representation learning for joint label refinement and representation enhancement.
• Diverse peer networks generate complementary feature perspectives through self-attention-driven weight diversification, while knowledge distillation synchronizes logit calibration and feature disentanglement.
• Dual-mode contrastive learning applies unsupervised contrastive learning to pseudo-clean samples, capturing intrinsic data geometry under class imbalance, and reweighted supervised contrastive learning to pseudo-noise samples, enforcing class-discriminative features (a sketch of this dual-mode split follows below).
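The last highlight describes a concrete loss construction, so a minimal PyTorch sketch of how such a dual-mode contrastive objective could be assembled is given here. Everything in it is an illustrative assumption rather than RCKD's actual implementation: the function names (`info_nce`, `reweighted_supcon`, `dual_mode_loss`), the temperatures, and the inverse-frequency-style class weighting are placeholders, and the paper's pseudo-clean/pseudo-noise partition and reweighting scheme may differ.

```python
# Hedged sketch of a dual-mode contrastive loss: unsupervised InfoNCE for
# pseudo-clean samples, class-reweighted supervised contrastive loss for
# pseudo-noise samples. Names and hyperparameters are assumptions, not the
# paper's actual implementation.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """SimCLR-style unsupervised contrastive loss over two augmented views.
    z1, z2: (N, d) projected embeddings of the same N samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / tau                                # scaled cosine sims
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))           # drop self-similarity
    # the positive of sample i is its other view at index (i + n) mod 2n
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)

def reweighted_supcon(z, labels, class_weights, tau=0.1):
    """Supervised contrastive loss with per-class weights (e.g. inverse class
    frequency) so tail classes are not swamped by head classes.
    z: (N, d) embeddings; labels: (N,); class_weights: (C,)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # mean log-likelihood over each anchor's positives (zero out non-positives
    # with masked_fill to avoid -inf * 0 = NaN on the diagonal)
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss_i = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_count
    valid = pos_mask.any(1)                              # anchors w/ positives
    w = class_weights[labels]                            # per-anchor weight
    return (w[valid] * loss_i[valid]).sum() / w[valid].sum()

def dual_mode_loss(z1, z2, labels, is_clean, class_weights):
    """Route pseudo-clean samples to unsupervised InfoNCE and pseudo-noise
    samples to reweighted SupCon, mirroring the split in the highlight."""
    loss = z1.new_zeros(())
    if is_clean.any():
        loss = loss + info_nce(z1[is_clean], z2[is_clean])
    noisy = ~is_clean
    if noisy.sum() > 1:
        loss = loss + reweighted_supcon(z1[noisy], labels[noisy], class_weights)
    return loss
```

In a pipeline like the one the highlights describe, the `is_clean` mask would presumably come from the label-refinement stage (e.g. agreement among the peer experts) and `class_weights` from the long-tailed class distribution, but both are left abstract here since the abstract does not specify them.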
External IDs: dblp:journals/kbs/LiZZLLW25