Continual Learning from the Perspective of Compression

12 Jun 2020 (modified: 05 May 2023), LifelongML@ICML2020
Student First Author: Yes
TL;DR: Unifying VCL and generative replay under the compression framework and introducing a new continual learning method, the Maximum Likelihood Mixture code.
Keywords: Continual Learning, Compression, MDL
Abstract: Connectionist models such as neural networks suffer from catastrophic forgetting. In this work, we study this problem from the perspective of information theory and define forgetting as the increase of description lengths of previous data when they are compressed with a sequentially learned model. In addition, we show that continual learning approaches based on variational posterior approximation and generative replay can be considered as approximations to two prequential coding methods in compression, namely, the Bayesian mixture code and maximum likelihood (ML) plug-in code. We compare these approaches in terms of both compression and forgetting and empirically study the reasons that limit the performance of continual learning methods based on variational posterior approximation. To address these limitations, we propose a new continual learning method that combines ML plug-in and Bayesian mixture codes.
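As a rough sketch of the coding-theoretic quantities the abstract refers to (the notation below is ours and may differ from the paper's), the prequential description length of a data stream x_1, ..., x_T under a predictive model is

L_{\text{preq}}(x_{1:T}) = \sum_{t=1}^{T} -\log p(x_t \mid x_{1:t-1}),

where the Bayesian mixture code predicts by averaging over the posterior, p(x_t \mid x_{<t}) = \int p(x_t \mid \theta)\, p(\theta \mid x_{<t})\, d\theta, and the ML plug-in code predicts with a point estimate, p(x_t \mid x_{<t}) = p(x_t \mid \hat{\theta}(x_{<t})), with \hat{\theta} the maximum-likelihood fit to the data seen so far. Under this reading, forgetting of an earlier task's data D_i after training through a later task t > i is the increase in its description length, \Delta_i = L(D_i; \theta_t) - L(D_i; \theta_i).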