Prior-free Balanced Replay: Uncertainty-guided Reservoir Sampling for Long-Tailed Continual Learning

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: Even in the era of large models, one of the well-known issues in continual learning (CL) is catastrophic forgetting, which becomes significantly more challenging when the continual data stream exhibits a long-tailed distribution, a setting termed Long-Tailed Continual Learning (LTCL). Existing LTCL solutions generally require the label distribution of the data stream to achieve re-balanced training. However, obtaining such prior information is often infeasible in real scenarios, since the model should learn without pre-identifying the majority and minority classes. To this end, we propose a novel Prior-free Balanced Replay (PBR) framework to learn from a long-tailed data stream with less forgetting. Concretely, motivated by our experimental finding that minority classes are more likely to be forgotten due to their higher uncertainty, we design an uncertainty-guided reservoir sampling strategy that prioritizes rehearsing minority data without using any prior information, based on the mutual dependence between the model and samples. Additionally, we incorporate two prior-free components to further reduce forgetting: (1) a boundary constraint preserves uncertain boundary-supporting samples for continually re-estimating task boundaries, and (2) a prototype constraint maintains the consistency of learned class prototypes throughout training. Our approach is evaluated on three standard long-tailed benchmarks, demonstrating superior performance to existing CL methods and the previous SOTA LTCL approach in both task- and class-incremental learning settings, as well as ordered- and shuffled-LTCL settings.
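To make the core idea concrete, the following is a minimal, hypothetical sketch of what uncertainty-guided reservoir sampling could look like. It uses a generic per-sample uncertainty score (e.g., predictive entropy) as a stand-in for the paper's model-sample mutual-dependence measure, and all names are illustrative rather than the authors' implementation:

```python
import random


class UncertaintyReservoir:
    """Fixed-size replay buffer that preferentially retains high-uncertainty
    (likely minority-class) samples, without any label-distribution prior.
    Illustrative sketch only; the actual PBR criterion may differ."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []       # stored (sample, label) pairs
        self.scores = []       # per-slot uncertainty scores
        self.seen = 0          # number of stream samples observed so far

    def add(self, sample, label, score):
        """score: the model's uncertainty for this sample, e.g. softmax entropy."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((sample, label))
            self.scores.append(score)
            return
        # Standard reservoir acceptance probability capacity / seen ...
        if random.random() < self.capacity / self.seen:
            # ... but instead of evicting a uniformly random slot, evict the
            # least uncertain one, so uncertain (minority) samples persist.
            idx = min(range(len(self.scores)), key=self.scores.__getitem__)
            if score > self.scores[idx]:
                self.buffer[idx] = (sample, label)
                self.scores[idx] = score
```

In use, each incoming stream sample would be scored by the current model before calling `add`, and the buffer contents would be mixed into each training batch for rehearsal; the boundary and prototype constraints described above are separate losses not shown here.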
Primary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: Long-tailed continual learning (LTCL) is a subfield of machine learning that addresses the challenges models face when the data distribution is long-tailed (certain classes have significantly fewer samples than others) and when new classes or tasks are introduced sequentially over time (continual learning). This setting is particularly relevant to multimedia and multimodal processing, where data often naturally follows such skewed distributions and there is a continuous inflow of diverse and complex data. In multimedia and multimodal processing, information can come from various sources such as images, videos, audio, and text, each potentially exhibiting long-tailed distributions within or across modalities. LTCL can contribute in several ways: (1) handling imbalanced distributions, (2) incremental learning, (3) avoiding catastrophic forgetting, (4) multimodal fusion, and (5) resource efficiency.
Supplementary Material: zip
Submission Number: 2584