Conditional Input Gated Low-Rank Perturbations for Continual Learning

Anonymous

23 Nov 2020 (modified: 05 May 2023) · AABI 2021 Symposium Blind Submission
Keywords: continual learning, perturbations, discriminative models, data driven, catastrophic forgetting
TL;DR: We overcome catastrophic forgetting by data-driven network expansion
Abstract: We address the problem of learning convolutional neural networks (CNNs) in the continual setting, where tasks arrive sequentially and only the data of the current task is available. In this setting, CNNs are prone to drastic quality degradation on all old tasks. In this work, we extend the idea of \citet{abati} of data-conditional expansion of the CNN architecture. We propose to use low-rank, and hence weaker, additive perturbations of the CNN filters, which is sufficient due to the compositional structure of CNN layers. Such low-rank adaptation modules allow us to reduce computational costs and promote sparsity in the adaptation of the CNN to new tasks. We validate our approach empirically on the split MNIST and CIFAR-10 tasks.
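
The sketch below illustrates one way a per-task low-rank additive perturbation of a frozen convolutional filter could be parameterized. It is not the authors' exact formulation (which also involves conditional input gating of the adaptation modules); the class name, parameter shapes, and rank are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankAdaptedConv2d(nn.Module):
    """Frozen base convolution plus a per-task low-rank additive filter perturbation."""

    def __init__(self, in_ch, out_ch, kernel_size, num_tasks, rank=4):
        super().__init__()
        # Base filter, trained on the first task and then frozen.
        self.base = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        for p in self.base.parameters():
            p.requires_grad = False
        k = kernel_size
        # Per-task low-rank factors; delta_W = A[t] @ B[t], reshaped to the filter shape.
        # A starts at zero so each new task begins from the unperturbed base network.
        self.A = nn.Parameter(torch.zeros(num_tasks, out_ch, rank))
        self.B = nn.Parameter(torch.randn(num_tasks, rank, in_ch * k * k) * 0.01)
        self.filter_shape = (out_ch, in_ch, k, k)

    def forward(self, x, task_id):
        # Additive low-rank perturbation of the frozen filter for the given task.
        delta = (self.A[task_id] @ self.B[task_id]).view(self.filter_shape)
        weight = self.base.weight + delta
        return F.conv2d(x, weight, self.base.bias, padding=self.base.padding)
```

Because the rank is small relative to the filter dimensions, each new task adds only a thin pair of factor matrices rather than a full copy of the layer, which is what keeps the per-task expansion cheap.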