Abstract: Large vision-language models (VLMs) have shown significant performance gains across a variety of application domains. However, adopting them for sequentially encountered tasks remains limited, because fine-tuning a VLM on a single task typically degrades both its generalization ability and its capacity to learn new tasks. Enabling VLMs to operate in multimodal continual learning (CL) settings helps address such scenarios. We therefore propose a novel prompt-based CL method for VLMs, namely $\textbf{Clu}$ster-based $\textbf{Mo}$dality Fusion Prompt (CluMo). Our approach mitigates catastrophic forgetting by constructing modality-specific prompts and using $k$-means clustering to select the most semantically matched prompt, which also enables benefiting from past experiences through forward transfer. Experiments on two benchmarks demonstrate that our method achieves state-of-the-art performance compared to existing alternatives.
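The following is a minimal sketch, not the authors' released code, of the cluster-based prompt-selection idea mentioned in the abstract: prompt keys are taken to be $k$-means centroids over feature embeddings, and a query is routed to the prompt whose key is closest. All names and shapes (`n_prompts`, `embed_dim`, `select_prompt`, the random stand-in features) are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

n_prompts, embed_dim = 4, 32  # assumed prompt-pool size and feature dimension

# Assumed: feature embeddings of past training samples for one modality
# (e.g., visual features from a frozen encoder); random stand-ins here.
train_feats = np.random.randn(1000, embed_dim)

# Fit k-means: each cluster centroid serves as the key of one prompt.
kmeans = KMeans(n_clusters=n_prompts, n_init=10, random_state=0).fit(train_feats)
prompt_keys = kmeans.cluster_centers_                    # (n_prompts, embed_dim)
prompt_pool = np.random.randn(n_prompts, 8, embed_dim)   # illustrative learnable prompts

def select_prompt(query_feat: np.ndarray) -> np.ndarray:
    """Return the prompt whose centroid key is nearest to the query embedding."""
    dists = np.linalg.norm(prompt_keys - query_feat, axis=1)
    return prompt_pool[np.argmin(dists)]

# Usage: route a new sample's embedding to its semantically closest prompt.
prompt = select_prompt(np.random.randn(embed_dim))
print(prompt.shape)  # (8, 32)
```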
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Visual Question Answering, Multimodality, Continual Learning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1800