CD-IMM: The Benefits of Domain-based Mixture Models in Bayesian Continual Learning

Published: 03 Apr 2024, Last Modified: 03 Apr 2024
Venue: 1st CLAI Unconf
Keywords: continual learning; Bayesian methods; class-incremental learning; mixture models
TL;DR: We propose a Bayesian mixture of domains as a robust and general continual learning model.
Abstract: Real-world streams of data are characterised by the continuous occurrence of new and old classes, possibly on novel domains. Bayesian non-parametric mixture models provide a natural solution for continual learning due to their ability to create new components on the fly when new data are observed. However, popular class-based and time-based mixtures are often tested on simplified streams (e.g., class-incremental), where shortcuts can be exploited to infer drifts. We hypothesise that domain-based mixtures are more effective on natural streams. Our proposed method, the CD-IMM, exemplifies this approach by learning an infinite mixture of domains for each class. We experiment on a natural scenario with a mix of class repetitions and novel domains to validate our hypothesis.
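To make the abstract's core idea concrete, below is a minimal sketch of a per-class infinite mixture of domains, using a greedy CRP-style (Dirichlet-process) assignment with fixed-variance Gaussian components. This is an illustration under stated assumptions, not the authors' implementation: the class name PerClassDomainMixture, the parameters alpha and var, and the Gaussian likelihood are all hypothetical choices made for the example.

```python
# Sketch of the idea in the abstract: one Dirichlet-process (CRP-style)
# mixture of "domain" components per class, so new domain components are
# created on the fly as novel data arrive. Illustrative only.
import numpy as np

class PerClassDomainMixture:
    """One infinite mixture of domain components for a single class."""

    def __init__(self, alpha=1.0, var=1.0):
        self.alpha = alpha   # CRP concentration: higher -> new domains open more easily
        self.var = var       # fixed isotropic variance of each Gaussian domain
        self.counts = []     # number of points assigned to each domain
        self.sums = []       # running feature sums, one per domain

    def _log_pred(self, x, k):
        # Gaussian score of x under domain k (mean of previously assigned points).
        mu = self.sums[k] / self.counts[k]
        return -0.5 * np.sum((x - mu) ** 2) / self.var

    def assign(self, x):
        """CRP-style assignment: pick an existing domain or open a new one."""
        scores = [np.log(n) + self._log_pred(x, k)
                  for k, n in enumerate(self.counts)]
        # Mass for a brand-new domain: concentration times a zero-mean,
        # unit-variance prior predictive (a simplifying assumption).
        scores.append(np.log(self.alpha)
                      - 0.5 * np.sum(x ** 2) / (self.var + 1.0))
        k = int(np.argmax(scores))       # greedy MAP assignment, for simplicity
        if k == len(self.counts):        # novel domain: create a component on the fly
            self.counts.append(0)
            self.sums.append(np.zeros_like(x, dtype=float))
        self.counts[k] += 1
        self.sums[k] += x
        return k

# One mixture per observed class; classes repeat and new domains appear
# over the stream, as in the natural scenario the abstract describes.
mixtures = {}
stream = [(0, np.array([0.1, 0.0])),   # class 0, first domain
          (0, np.array([5.0, 5.2])),   # class 0 again, on a novel domain
          (1, np.array([2.0, 2.1]))]   # a new class
for y, x in stream:
    domain = mixtures.setdefault(y, PerClassDomainMixture()).assign(x)
    print(f"class {y} -> domain {domain}")
```

A full treatment would replace the greedy MAP step with proper posterior inference and learn the component parameters; the sketch only shows why a per-class mixture of domains can absorb class repetitions while still opening new components for genuinely novel domains.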
Submission Number: 9