Mixture of Basis for Interpretable Continual Learning with Distribution Shifts

Published: 02 Dec 2021, Last Modified: 05 May 2023, NeurIPS 2021 Workshop DistShift Poster
Keywords: Continual learning, Distribution Shift
TL;DR: We develop MoB, a method that learns a small set of basis models and constructs a dynamic, task-dependent mixture of these models to make predictions for the current task.
Abstract: Continual learning in environments with shifting data distributions is a challenging problem with several real-world applications. In this paper, we consider settings in which the data distribution (task) shifts abruptly and the timing of these shifts is not known. Furthermore, we consider a $\textit{semi-supervised task-agnostic}$ setting in which the learning algorithm has access to both task-segmented and unsegmented data for offline training. We propose a new approach for this problem setting, Mixture of Basis models (MoB). The core idea is to learn a small set of basis models and to construct a dynamic, task-dependent mixture of these models to make predictions for the current task. We also propose a new methodology for detecting observations that are out-of-distribution with respect to the existing basis models and for instantiating new models. We test our approach in multiple domains and show that in most cases it achieves lower prediction error than existing methods while using fewer models. Moreover, we analyze the latent task representations learned by MoB and show that similar tasks cluster in the latent space, and that the latent representation shifts at task boundaries when the tasks are dissimilar.
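
Illustration: the abstract's core idea is a small set of basis models combined with dynamic, task-dependent mixture weights. The minimal sketch below shows one way such a mixture could be wired up; the gating network, the latent task representation, and all names and shapes are assumptions for illustration, not the paper's actual MoB implementation.

# Hypothetical sketch: a task-dependent mixture over a small set of basis
# models. Details (gating, task representation) are illustrative assumptions.
import torch
import torch.nn as nn

class MixtureOfBasis(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=3, task_dim=8):
        super().__init__()
        # Small set of basis predictors (assumed simple MLPs here).
        self.basis = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))
            for _ in range(n_basis)
        ])
        # Gating network: maps a latent task representation to mixture weights.
        self.gate = nn.Linear(task_dim, n_basis)

    def forward(self, x, task_repr):
        # Task-dependent mixture weights over the basis models.
        w = torch.softmax(self.gate(task_repr), dim=-1)          # (batch, n_basis)
        preds = torch.stack([f(x) for f in self.basis], dim=1)   # (batch, n_basis, out_dim)
        return (w.unsqueeze(-1) * preds).sum(dim=1)              # (batch, out_dim)

# Usage sketch: predict on a batch given the current latent task representation.
model = MixtureOfBasis(in_dim=4, out_dim=1)
x = torch.randn(16, 4)
task_repr = torch.randn(16, 8)
y_hat = model(x, task_repr)  # shape (16, 1)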