Optimizing Class Distribution in Memory for Multi-Label Continual Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: online continual learning
Abstract: Continual learning, which aims to learn from a data stream with a non-stationary distribution, is an important yet challenging problem. Among the most effective approaches are replay-based methods, which maintain a replay buffer, called memory, that keeps a small subset of past samples; the model rehearses these samples to preserve its performance on old distributions while learning on new ones. Most existing replay-based methods focus on single-label problems, in which each sample in the data stream has only one label. However, many real applications are multi-label problems, in which each sample may have more than one label. To the best of our knowledge, there exists only one method, called partition reservoir sampling (PRS), for multi-label continual learning, and PRS is slow due to its complicated sampling process. In this paper, we propose a novel method, called optimizing class distribution in memory (OCDM), for multi-label continual learning. OCDM formulates the memory update mechanism as an optimization problem and updates the memory by solving this problem. Experiments on two widely used multi-label datasets show that OCDM outperforms state-of-the-art methods, including PRS, in terms of accuracy, while also being much faster than PRS.
One-sentence Summary: We consider online multi-label continual learning and propose a method to control the class distribution in memory.
Supplementary Material: zip
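To make the abstract's description of the memory update concrete, the sketch below shows one plausible way to "optimize the class distribution in memory": from the union of the current memory and an incoming batch, greedily drop samples whose removal keeps the memory's class distribution closest to a target distribution. This is an illustrative stand-in under stated assumptions, not the paper's exact algorithm: the uniform target, the KL-divergence objective, and all names (update_memory, class_distribution, kl_to_target) are our own choices for the sketch.

```python
import numpy as np

def class_distribution(label_sets, num_classes):
    """Empirical per-class frequency over a collection of label sets.
    Multi-label setting: each sample may carry several class labels."""
    counts = np.zeros(num_classes)
    for labels in label_sets:
        for c in labels:
            counts[c] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

def kl_to_target(dist, target, eps=1e-12):
    """KL(target || dist): distance of the memory's class distribution
    from the desired target distribution (eps avoids log(0))."""
    return float(np.sum(target * np.log((target + eps) / (dist + eps))))

def update_memory(memory, batch, memory_size, num_classes):
    """Greedy memory update: starting from memory ∪ batch, repeatedly
    remove the single sample whose removal minimizes the KL divergence
    to a uniform target distribution (assumption for this sketch),
    until the buffer fits in memory_size. `memory` and `batch` are
    lists of (sample, label_set) pairs."""
    pool = list(memory) + list(batch)
    target = np.ones(num_classes) / num_classes  # assumed uniform target
    while len(pool) > memory_size:
        best_i, best_d = None, float("inf")
        for i in range(len(pool)):
            rest = pool[:i] + pool[i + 1:]
            d = kl_to_target(
                class_distribution([lbl for _, lbl in rest], num_classes),
                target,
            )
            if d < best_d:
                best_i, best_d = i, d
        pool.pop(best_i)  # drop the sample whose removal helps most
    return pool
```

A reservoir-style baseline would instead keep a uniform sample of the stream, which in multi-label data lets frequent classes crowd out rare ones; the point of optimizing the class distribution directly, as the abstract describes, is to keep rare classes represented in memory.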