Chameleon Sampling: Diverse and Pure Example Selection for Online Continual Learning with Noisy Labels

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Continual Learning, Robust Learning, Noisy Labels, Label Noise
Abstract: AI models suffer from continuously changing data distributions and noisy labels when applied to most real-world problems. Although many solutions address continual learning or label noise individually, tackling both issues together is important yet underexplored. Here, we address the task of online continual learning with noisy labels, a more realistic, practical, and challenging continual-learning setup in which the given labels may be noisy. Specifically, we argue for the importance of both the diversity and the purity of examples in the episodic memory of continual-learning models. To balance the two in memory, we combine a novel memory-management strategy with robust learning: we propose a metric that balances the trade-off between diversity and purity in the episodic memory under label noise, and we then either refurbish noisy examples or apply unsupervised learning to them after splitting them into groups with a Gaussian mixture model. We validate our approach on four synthetic and real-world benchmark datasets, including two CIFARs, Clothing1M, and mini-WebVision, and demonstrate significant improvements over representative methods on this challenging task setup.
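The abstract does not spell out the split-and-refurbish procedure or the diversity-purity metric, so the sketch below is only one plausible instantiation, not the paper's method: a two-component Gaussian mixture fit on per-example losses (a common recipe for separating presumably-clean from noisy examples, as in DivideMix), confident relabeling as a stand-in for "refurbishing", and a hypothetical memory score mixing estimated purity with feature diversity. All function names, thresholds, and the `alpha` weight are assumptions.

```python
# Minimal sketch: GMM-based clean/noisy split plus an illustrative
# diversity-purity memory score. Thresholds and names are assumptions.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def split_by_gmm(losses: np.ndarray, clean_threshold: float = 0.5):
    """Fit a 2-component GMM to per-example losses and return the posterior
    probability that each example belongs to the low-loss (presumed clean)
    component, plus a boolean clean mask."""
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses.reshape(-1, 1))
    clean_component = gmm.means_.argmin()  # component with the lower mean loss
    p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_component]
    return p_clean, p_clean > clean_threshold


@torch.no_grad()
def partition_batch(model, x, y, refurbish_conf: float = 0.9):
    """Split a noisy batch into clean / refurbished / unlabeled groups."""
    logits = model(x)
    losses = F.cross_entropy(logits, y, reduction="none").cpu().numpy()
    p_clean, is_clean = split_by_gmm(losses)

    probs = logits.softmax(dim=1)
    conf, pred = probs.max(dim=1)
    noisy = torch.from_numpy(~is_clean)
    # Noisy examples the model is confident about get relabeled
    # ("refurbished"); the rest are kept for unsupervised learning.
    refurbish = noisy & (conf.cpu() > refurbish_conf)
    unlabeled = noisy & ~refurbish
    return is_clean, refurbish.numpy(), unlabeled.numpy(), pred.cpu().numpy()


def memory_score(p_clean, emb, memory_emb, alpha=0.5):
    """Illustrative diversity-purity score for a memory candidate: higher
    when the example is likely clean AND far from what the episodic memory
    already stores. `alpha` is a hypothetical trade-off weight; the paper's
    actual metric is not specified in the abstract."""
    if len(memory_emb) == 0:
        return p_clean
    d = np.linalg.norm(memory_emb - emb, axis=1).min()  # nearest-neighbor distance
    diversity = d / (1.0 + d)                           # squash to [0, 1)
    return alpha * p_clean + (1 - alpha) * diversity
```

A memory manager built on this sketch would score each incoming example with `memory_score` and replace the lowest-scoring stored example when the buffer is full, so that the memory stays both label-pure and feature-diverse.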
One-sentence Summary: We address the task of online continual learning with noisy labels.