Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space

Published: 01 Jan 2024 · Last Modified: 27 Jul 2025 · IJCNN 2024 · License: CC BY-SA 4.0
Abstract: Deep neural networks lack the ability to learn sequentially from new data and adapt to new scenarios. In particular, after learning from new data, neural networks suffer a significant performance degradation on old knowledge. This phenomenon is known as catastrophic forgetting. To mitigate this issue, we propose expanding and merging additional parameters, encapsulated in a specially designed adapter layer, into the frozen pretrained vision encoder. Along with the newly added parameters, adapter scaling weights in each layer are also introduced to adaptively control the fusion of new and old knowledge. Additionally, the fixed embedding space of a pretrained text encoder is used to guide the continual learning of the vision encoder. Extensive experiments on three datasets demonstrate that the proposed method outperforms current state-of-the-art methods. The source code is available at https://github.com/GiantJun/Expand_and_Merge.
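To make the mechanism in the abstract concrete, the minimal PyTorch sketch below shows one way an adapter with a per-layer learnable scaling weight could be attached to a frozen encoder block, and how fixed class-text embeddings could guide classification. The class and function names, the bottleneck size, and the sigmoid gating are illustrative assumptions rather than the authors' implementation; the actual code is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter added alongside a frozen encoder block."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        # Per-layer scaling weight that adaptively controls the fusion
        # of new (adapter) and old (frozen backbone) knowledge.
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.sigmoid(self.scale) * self.up(F.relu(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen pretrained block with a trainable adapter."""

    def __init__(self, frozen_block: nn.Module, dim: int):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad = False  # keep the pretrained weights fixed
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


def text_guided_loss(image_feats: torch.Tensor,
                     text_embeds: torch.Tensor,
                     labels: torch.Tensor,
                     temperature: float = 0.01) -> torch.Tensor:
    """Classify image features against fixed class-text embeddings.

    `text_embeds` come from a frozen pretrained text encoder, so the
    target embedding space stays fixed across tasks.
    """
    image_feats = F.normalize(image_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_feats @ text_embeds.t() / temperature
    return F.cross_entropy(logits, labels)
```

In this reading, only the adapter parameters and scaling weights are updated on each new task, while the vision backbone and the text embedding space remain frozen, which is consistent with the "expand and merge" framing in the abstract.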