Leveraging low rank filters for efficient and knowledge-preserving lifelong learning

Published: 01 Nov 2023 · Last Modified: 14 Nov 2024 · OpenReview Archive Direct Upload · CC BY 4.0
Abstract: We propose a continual learning approach based on low-rank filter approximation, which decomposes convolution filters into compact basis filters and remixing coefficients. For lifelong learning, we keep the same basis filters to allow knowledge sharing, but add a separate set of coefficients for each new task. Task-specific feature maps are computed by a sequence of convolutions: first with the shared basis filters, then with the task-specific coefficients. This design lets the model preserve previously learned knowledge and thereby avoid catastrophic forgetting. Moreover, because the basis is compact, only a small number of basis filters is needed, which reduces both the FLOPs and the parameter count of the model. To demonstrate the efficiency of the proposed approach, we evaluate our model on a variety of datasets and network architectures. With a ResNet-18 based architecture, we report improved performance on CIFAR-100 with significantly fewer FLOPs and parameters than other methods. On ImageNet, our method achieves performance comparable to other recent methods at reduced FLOPs.
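The abstract describes replacing each standard convolution with a shared spatial convolution onto a few basis filters, followed by a per-task 1x1 convolution of remixing coefficients. The following is a minimal PyTorch sketch of that decomposition under our reading of the abstract; the class name `LowRankTaskConv`, the `num_basis` parameter, and the `add_task` helper are illustrative assumptions, not names from the paper, and training details (e.g., when the basis is frozen) are not specified in the abstract.

```python
import torch
import torch.nn as nn

class LowRankTaskConv(nn.Module):
    """Sketch: low-rank filter decomposition for lifelong learning.

    A standard in_ch -> out_ch, k x k convolution is approximated by
      1) a shared k x k convolution onto a small set of basis filters, and
      2) a per-task 1 x 1 convolution that remixes the basis responses.
    The basis is shared across tasks (preserving prior knowledge); only
    the per-task coefficients are added as new tasks arrive.
    """

    def __init__(self, in_ch, out_ch, num_basis, kernel_size=3):
        super().__init__()
        # Shared basis filters, reused by all tasks.
        self.basis = nn.Conv2d(in_ch, num_basis, kernel_size,
                               padding=kernel_size // 2, bias=False)
        # One set of remixing coefficients (a 1x1 conv) per task.
        self.coeffs = nn.ModuleList()
        self.out_ch = out_ch
        self.num_basis = num_basis

    def add_task(self):
        """Register fresh remixing coefficients for a new task."""
        self.coeffs.append(nn.Conv2d(self.num_basis, self.out_ch, 1, bias=False))

    def forward(self, x, task_id):
        # Shared spatial convolution, then task-specific channel remixing.
        return self.coeffs[task_id](self.basis(x))


# Usage sketch: add a task, then compute its task-specific feature maps.
layer = LowRankTaskConv(in_ch=64, out_ch=64, num_basis=16)
layer.add_task()  # task 0
y = layer(torch.randn(1, 64, 32, 32), task_id=0)
```

With `num_basis` much smaller than `out_ch`, the shared-plus-per-task parameter count (`in_ch * num_basis * k^2 + num_basis * out_ch` per task) falls well below that of a full convolution (`in_ch * out_ch * k^2`), which is consistent with the FLOP and parameter reductions the abstract claims.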