Mixture of Basis for Interpretable Continual Learning with Distribution Shifts

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: continual learning, lifelong learning, distribution shift, interpretable learning, semi-supervised learning
TL;DR: We develop a novel continual learning algorithm, Mixture of Basis models (MoB), that constructs a dynamic, task-dependent mixture of interpretable models and outperforms other continual learning algorithms across several diverse problem domains.
Abstract: Continual learning in environments with shifting data distributions is a challenging problem with several real-world applications. In this paper we consider settings in which the data distribution (i.e., task) shifts abruptly and the timing of these shifts is not known. Furthermore, we consider a $\textit{semi-supervised task-agnostic}$ setting in which the learning algorithm has access to both task-segmented and unsegmented data for offline training. We propose a novel approach called $\textit{Mixture of Basis}$ models $\textit{(MoB)}$ to address this problem setting. The core idea is to learn a small set of $\textit{basis models}$ and to construct a dynamic, task-dependent mixture of these models to make predictions for the current task. We also propose a new methodology to detect observations that are out-of-distribution with respect to the existing basis models and to instantiate new models as needed. We develop novel problem domains for regression tasks, evaluate MoB and other continual learning algorithms on them, and show that MoB attains lower prediction error in nearly every case while using fewer models than other multiple-model approaches. We analyze the latent task representations learned by MoB alongside the tasks themselves, using both qualitative and quantitative measures, to show that the learned latent task representations can be interpretably linked to the structure of the task space.
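The core idea stated in the abstract, predicting with a task-dependent mixture over a small set of basis models, can be sketched in code. The following is a minimal illustration only, not the authors' implementation: the `MixtureOfBasis` class, the MLP basis architecture, the linear gating network, and the latent task vector `z_task` are all assumptions, since the abstract specifies none of these details.

```python
# Minimal sketch of a mixture-of-basis predictor. All names, shapes, and
# architectural choices below are assumptions for illustration only.
import torch
import torch.nn as nn

class MixtureOfBasis(nn.Module):
    """Predict by mixing a small set of basis models with task-dependent weights."""

    def __init__(self, in_dim: int, out_dim: int, n_basis: int, task_dim: int = 8):
        super().__init__()
        # Basis models: a small set of independent regressors (assumed to be MLPs).
        self.basis = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))
            for _ in range(n_basis)
        )
        # Gating network maps a latent task representation to mixture weights.
        self.gate = nn.Linear(task_dim, n_basis)

    def forward(self, x: torch.Tensor, z_task: torch.Tensor) -> torch.Tensor:
        # Task-dependent mixture weights; softmax makes them sum to one.
        w = torch.softmax(self.gate(z_task), dim=-1)             # (n_basis,)
        # Predictions from every basis model, stacked along a new axis.
        preds = torch.stack([m(x) for m in self.basis], dim=0)   # (n_basis, B, out_dim)
        # Weighted combination of the basis-model predictions.
        return torch.einsum("k,kbo->bo", w, preds)               # (B, out_dim)

# Usage: 3 basis models, scalar regression, one latent task vector per batch.
# How z_task is inferred from unsegmented data is left out of this sketch.
mob = MixtureOfBasis(in_dim=4, out_dim=1, n_basis=3)
x = torch.randn(16, 4)   # batch of inputs
z = torch.randn(8)       # latent task representation (assumed given)
y_hat = mob(x, z)        # (16, 1)
```

In this reading, the out-of-distribution step the abstract mentions would decide, per observation, whether the existing basis set can explain the data or a new basis model should be instantiated; that detection logic is not shown here.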
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning