Keywords: VLM Finetuning, Catastrophic Forgetting, Continual Learning
Abstract: *This paper does not propose a new method; rather, we find that simple adjustments to the fine-tuning recipes of vision-language models (VLMs) are sufficient to mitigate catastrophic forgetting.* Using visual question answering tasks, we design a 2×2 experimental framework to assess model performance across in-distribution and out-of-distribution image and text inputs. Our results show that appropriate regularization, such as constraining the number of trainable parameters or adopting a low learning rate, effectively prevents forgetting on out-of-distribution images. However, we uncover a distinct form of forgetting in settings with in-distribution images and out-of-distribution text. We attribute this forgetting to task-specific overfitting and address it by introducing a data-hybrid training strategy that combines datasets and tasks. Finally, we demonstrate that this approach extends naturally to continual learning, outperforming existing methods without complex auxiliary mechanisms. Overall, our findings challenge prevailing assumptions by highlighting the inherent robustness of VLMs and providing practical guidelines for adapting them while preserving their general-purpose capabilities.
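The abstract names two concrete recipes: regularized fine-tuning (few trainable parameters, low learning rate) and data-hybrid training (mixing the target data with general-purpose data). The sketch below illustrates both in PyTorch under stated assumptions; it is not the authors' released code, and the `projector` parameter prefix, model object, and dataset variables are placeholders.

```python
# Minimal sketch of the two recipes described in the abstract.
# Assumptions: `vlm` is a PyTorch VLM whose tunable adapter parameters
# share the (hypothetical) name prefix "projector"; `target_ds` and
# `general_ds` are PyTorch Datasets with compatible items.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def regularized_finetune_setup(vlm, lr=1e-5):
    """Recipe 1: constrain the number of trainable parameters and use a
    low learning rate. Only the (assumed) projector is left trainable;
    the vision and language backbones stay frozen."""
    for name, param in vlm.named_parameters():
        param.requires_grad = name.startswith("projector")
    trainable = [p for p in vlm.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)  # low LR acts as regularizer

def hybrid_loader(target_ds, general_ds, batch_size=32):
    """Recipe 2: data-hybrid training. Batches are drawn from the union
    of the task-specific and general-purpose datasets, so each update
    mixes tasks and reduces task-specific overfitting."""
    mixed = ConcatDataset([target_ds, general_ds])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True)
```

In a training loop, one would pair the optimizer from `regularized_finetune_setup` with batches from `hybrid_loader`; the exact parameter split and mixing ratio are design choices the paper evaluates, not fixed by this sketch.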
Submission Number: 77