TL;DR: A model- and data-friendly, memory-efficient replay method based on tensor decomposition.
Abstract: Class-Incremental Learning (CIL) has gained considerable attention due to its capacity to accommodate new classes during learning. Replay-based methods achieve state-of-the-art performance in CIL but suffer from high memory consumption, as they must store a set of old exemplars for revisiting. To address this challenge, many memory-efficient replay methods have been developed by exploiting image compression techniques. However, when pixel-level compression is used, the storage savings often come at the cost of discarded discriminative information. Here, we present a simple yet efficient approach that employs tensor decomposition to address these limitations. Our method fully exploits the low intrinsic dimensionality and pixel correlation of images to achieve high compression efficiency while preserving sufficient discriminative information, significantly enhancing performance. We also introduce a hybrid exemplar selection strategy to improve the representativeness and diversity of stored exemplars. Extensive experiments across datasets of varying resolutions consistently demonstrate that our approach substantially boosts the performance of baseline methods, showcasing strong generalization and robustness.
Lay Summary: Incremental learning aims to enable AI systems to learn new knowledge from dynamic data streams, but this often leads to severe forgetting of past knowledge. Inspired by the human learning process, researchers retain a portion of old knowledge for the system to ‘review’ while learning new knowledge. However, strict memory limits often cripple such replay methods.
We propose a memory-efficient replay approach based on tensor decomposition: instead of storing full images, we decompose each image into a set of small factors via CP decomposition, drastically reducing storage requirements while preserving reconstruction quality. We further introduce a two-stage exemplar selection strategy, first choosing representative raw exemplars and then augmenting them with well-reconstructed compressed ones, to ensure both coverage and fidelity. Integrated into class-incremental learning frameworks, our method significantly boosts their performance, especially under tight memory constraints. These results hold across various image resolutions and tasks.
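To make the compression step concrete, here is a minimal sketch of CP-decomposition-based exemplar storage using the `tensorly` library. The rank, image shape, and helper names (`compress_image`, `reconstruct_image`) are illustrative assumptions, not the paper's actual hyperparameters or code.

```python
# Sketch: store CP factors of an image instead of raw pixels.
# Assumes the `tensorly` library; rank=8 is an illustrative choice.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def compress_image(img: np.ndarray, rank: int = 8):
    """Decompose an HxWxC image tensor into CP factors (hypothetical helper)."""
    weights, factors = parafac(tl.tensor(img.astype(np.float32)), rank=rank)
    return weights, factors  # store these small factors in the replay buffer

def reconstruct_image(weights, factors) -> np.ndarray:
    """Rebuild an approximate image from the stored CP factors."""
    return np.asarray(tl.cp_to_tensor((weights, factors)))

# Storage for a 32x32x3 image at rank 8:
# 8 * (32 + 32 + 3) factor entries + 8 weights = 544 floats,
# versus 3072 raw pixel values for the uncompressed image.
img = np.random.rand(32, 32, 3)
w, f = compress_image(img, rank=8)
recon = reconstruct_image(w, f)
```

The key property this illustrates is that CP storage grows with the sum of the tensor's mode sizes rather than their product, which is where the memory savings come from.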
This plug-and-play solution makes continual learning practical for real-world systems, enabling them to learn robustly and efficiently with far less memory overhead.
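The two-stage selection described in the lay summary could be sketched as follows. The concrete scores used here (distance to the class feature mean for representativeness, CP reconstruction error for fidelity) are assumptions for illustration; the paper's exact criteria may differ.

```python
# Hedged sketch of a two-stage exemplar selection per class.
import numpy as np

def select_exemplars(features, recon_errors, n_raw, n_compressed):
    """Stage 1: keep the n_raw samples closest to the class feature mean
    as raw exemplars. Stage 2: among the remaining samples, keep the
    n_compressed with the lowest CP reconstruction error as compressed
    exemplars. Both scoring rules are illustrative assumptions."""
    class_mean = features.mean(axis=0)
    dists = np.linalg.norm(features - class_mean, axis=1)
    order = np.argsort(dists)
    raw_ids = order[:n_raw]                    # most representative samples
    remaining = order[n_raw:]
    by_error = remaining[np.argsort(recon_errors[remaining])]
    comp_ids = by_error[:n_compressed]         # best-reconstructed samples
    return raw_ids, comp_ids
```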
Primary Area: General Machine Learning->Online Learning, Active Learning and Bandits
Keywords: Class Incremental Learning; Continual Learning
Submission Number: 6495