Keywords: Continual Learning, Memory Replay, Sample Generation, Multivariate Gaussian Distribution, Expectation-Maximization, Local Adaptation
Abstract: Continual learning has become increasingly important with recent advances in deep learning. Memory-based rehearsal is one of the dominant approaches: it samples data from previous tasks, stores them in memory, and retrains them together with the current task. However, since the entire dataset cannot be stored within a fixed memory capacity, knowledge of previous data is gradually lost. In this paper, we propose a method for storing and reproducing distributed representations of data for each class in memory. Data representations are grouped by class and converted into a multivariate Gaussian distribution, which is stored in memory in the form of means and variances. A generative algorithm then regenerates the representations of previous tasks so that they can be rehearsed alongside the current task. At inference time, local adaptation adjusts the model to the distributed representations of data, which change as the number of tasks increases. Experiments on CIFAR10, CIFAR100, and Tiny-ImageNet show performance improvements of 2.2%p, 5.01%p, and 3.44%p, respectively, over the state-of-the-art memory-replay method, confirming the effectiveness of the proposed data representation for memory replay.
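As a rough illustration of the storage-and-replay scheme described in the abstract, the following Python sketch shows how per-class feature statistics could be kept as a mean and covariance and later sampled for rehearsal. This is our own minimal sketch, not the authors' code; names such as GaussianMemory, store, and replay are hypothetical, and the EM fitting and local-adaptation steps from the paper are omitted.

import numpy as np

class GaussianMemory:
    """Stores one multivariate Gaussian (mean, covariance) per class."""

    def __init__(self):
        self.stats = {}  # class label -> (mean vector, covariance matrix)

    def store(self, label, features):
        # features: (n_samples, feature_dim) array of representations
        # extracted for one class; only the summary statistics are kept,
        # so memory cost is independent of the number of samples.
        mean = features.mean(axis=0)
        cov = np.cov(features, rowvar=False)
        self.stats[label] = (mean, cov)

    def replay(self, label, n_samples):
        # Regenerate pseudo-representations of a previous class so they
        # can be rehearsed together with the current task's data.
        mean, cov = self.stats[label]
        return np.random.multivariate_normal(mean, cov, size=n_samples)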
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: We propose a method for storing and reproducing distributed representations of data for each class in memory.