Online Class-incremental Continual Learning with Maximum Entropy Memory Update

Published: 01 Jan 2024 · Last Modified: 05 Aug 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: A continual learning agent, which faces a never-ending stream of data, suffers from severe catastrophic forgetting. To prevent forgetting, memory-based methods have proven effective by retaining a fraction of previous data in a fixed-size memory buffer to preserve the observed category information. Nevertheless, which samples should be kept in the memory buffer remains an open question, and existing methods rarely address this issue from the perspective of sample information. In this work, we contribute a concise yet effective memory update method, Maximum Entropy Memory Update (MEMU). MEMU retains samples adjacent to the decision boundaries, since we observe that these samples have higher entropy. To this end, we design an indicator to score each sample and retain the higher-scoring samples in the buffer. Experiments on five data streams with three metrics demonstrate that MEMU outperforms state-of-the-art baselines in the online continual learning setting.
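The core idea — scoring buffered candidates by the entropy of their predicted class distribution and keeping the highest-scoring ones — can be sketched as follows. This is a minimal illustration, not the paper's exact MEMU indicator: the selection rule, the `update_buffer` helper, and the use of raw Shannon entropy over softmax outputs are assumptions for exposition.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_scores(logits):
    # Shannon entropy of each sample's predicted class distribution.
    # Samples near decision boundaries have flatter distributions
    # and therefore higher entropy.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def update_buffer(buffer_x, buffer_logits, new_x, new_logits, capacity):
    # Pool the current buffer with the incoming batch, then keep the
    # `capacity` highest-entropy samples (hypothetical selection rule
    # standing in for the paper's scoring indicator).
    xs = np.concatenate([buffer_x, new_x])
    ls = np.concatenate([buffer_logits, new_logits])
    scores = entropy_scores(ls)
    keep = np.argsort(scores)[::-1][:capacity]
    return xs[keep], ls[keep]
```

Under this sketch, a confidently classified sample (peaked logits) is evicted before an ambiguous one (near-uniform logits), matching the intuition that boundary-adjacent samples carry more information for rehearsal.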