Deep Class-Conditional Gaussians for Continual Learning

Published: 21 Oct 2022, Last Modified: 05 May 2023 · NeurIPS 2022 Workshop DistShift Poster
Keywords: Continual Learning, Lifelong Learning, Bayesian, Empirical Bayes, Distribution Shift
TL;DR: We present DeepCCG, an empirical Bayesian method that addresses a key problem in continual learning: how to use simple metric-based probabilistic models when the embedding function must be learnt online.
Abstract: The current state of the art for continual learning with frozen, pre-trained embedding networks consists of simple probabilistic models defined over the embedding space, for example class-conditional Gaussians. As yet, in the task-incremental online setting, it has been an open question how to extend these methods to the case where the embedding function must be learned from scratch. In this paper, we propose DeepCCG, an empirical Bayesian method which learns online both a class-conditional Gaussian model and an embedding function. The learning process can be interpreted as a variant of experience replay, which is known to be effective in continual learning. As part of our framework, we decide which examples to store by selecting the subset that minimises the KL divergence between the true posterior and the posterior induced by the subset. We demonstrate performance in task-incremental online settings, including those with overlapping tasks. Our method outperforms all other methods compared against, including several other replay-based methods.
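To make the baseline referred to in the abstract concrete, below is a minimal sketch (not the authors' DeepCCG implementation) of a class-conditional Gaussian classifier over fixed embedding vectors, with a covariance matrix shared across classes. The class name and the ridge constant are illustrative choices, not taken from the paper.

```python
import numpy as np

class ClassConditionalGaussian:
    """Class-conditional Gaussians with a shared covariance,
    fitted over precomputed embedding vectors."""

    def fit(self, Z, y):
        # Z: (n, d) embedding vectors; y: (n,) integer class labels.
        self.classes = np.unique(y)
        self.means = np.stack([Z[y == c].mean(axis=0) for c in self.classes])
        # Pooled (shared) covariance, with a small ridge term for stability.
        centred = Z - self.means[np.searchsorted(self.classes, y)]
        d = Z.shape[1]
        self.cov_inv = np.linalg.inv(centred.T @ centred / len(Z) + 1e-6 * np.eye(d))
        return self

    def predict(self, Z):
        # Squared Mahalanobis distance to each class mean; under a shared
        # covariance and uniform class prior, the nearest mean is the MAP class.
        diffs = Z[:, None, :] - self.means[None, :, :]           # (n, k, d)
        dists = np.einsum("nkd,de,nke->nk", diffs, self.cov_inv, diffs)
        return self.classes[np.argmin(dists, axis=1)]
```

With a frozen embedding network, `fit` can be run once over stored embeddings; the paper's contribution is the harder setting where the embedding function itself is trained online alongside such a model.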