Keywords: Continual Learning, Vision-Language Models, Parameter-Efficient Fine-Tuning, Null-Space Methods
Abstract: Pre-trained vision-language models (VLMs), such as CLIP, have demonstrated remarkable zero-shot generalization, enabling their use across a wide range of real-world tasks without additional task-specific training.
However, in real deployment scenarios with evolving environments or emerging classes, these models inevitably face distributional shifts and novel tasks.
In such contexts, static zero-shot capabilities are insufficient, and there is a growing need for continual learning methods that allow models to adapt over time while avoiding catastrophic forgetting.
We introduce NuSA-CL (Null Space Adaptation for Continual Learning), a lightweight memory-free continual learning framework designed to address this challenge.
NuSA-CL employs low-rank adaptation and constrains task-specific weight updates to lie within an approximate null space of the model's current parameters.
This strategy minimizes interference with previously acquired knowledge, effectively preserving the zero-shot capabilities of the original model.
Unlike methods relying on replay buffers or costly distillation, NuSA-CL imposes minimal computational and memory overhead, making it practical for deployment in resource-constrained, real-world continual learning environments.
Experiments show that our framework not only effectively preserves zero-shot transfer capabilities but also achieves highly competitive performance on continual learning benchmarks.
These results position NuSA-CL as a practical and scalable solution for continually evolving zero-shot VLMs in real-world applications.
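To make the core idea concrete, the sketch below illustrates one plausible way to constrain low-rank weight updates to an approximate null space of a pre-trained weight matrix, in the spirit of the strategy described above. It is an illustrative assumption, not the authors' released implementation: the module name `NullSpaceLoRALinear`, the SVD-based choice of null-space basis, and the rank `r` are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of null-space-constrained
# low-rank adaptation: the trainable update is confined to directions in which
# the frozen pre-trained weight has near-zero singular values, so outputs for
# inputs the original model already responds to strongly are barely changed.
import torch
import torch.nn as nn


class NullSpaceLoRALinear(nn.Module):
    def __init__(self, base_linear: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep pre-trained weights frozen

        W = self.base.weight.data  # (out_features, in_features)
        # Right singular vectors with the smallest singular values span an
        # approximate null space of W (illustrative choice of basis).
        _, _, Vh = torch.linalg.svd(W, full_matrices=False)
        self.register_buffer("null_basis", Vh[-r:])  # (r, in_features), fixed

        # Trainable low-rank factors; the down-projection is composed with the
        # fixed null-space basis, so Delta_W = B @ A @ null_basis acts only
        # inside that subspace. A starts at zero so Delta_W = 0 initially.
        self.A = nn.Parameter(torch.zeros(r, r))
        self.B = nn.Parameter(torch.randn(W.shape[0], r) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rows of Delta_W lie in the approximate null space of W, so
        # (W + Delta_W) x ~= W x for inputs in W's principal subspace.
        delta = self.B @ self.A @ self.null_basis
        return self.base(x) + nn.functional.linear(x, delta)
```

Under this construction, only the small `A` and `B` factors are trained per task, which is consistent with the abstract's claim of minimal memory overhead; the exact projection and basis-selection details in NuSA-CL may differ.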
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 17495