Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0
Abstract: Continual multimodal instruction tuning is crucial for adapting Multimodal Large Language Models (MLLMs) to evolving tasks. However, most existing methods adopt a fixed architecture and struggle to adapt to new tasks because of their static model capacity. We propose to evolve the architecture under a parameter budget for dynamic task adaptation, which remains unexplored and poses two challenges: 1) task architecture conflict, where different tasks require different layer-wise adaptations, and 2) modality imbalance, where different tasks rely unevenly on modalities, leading to unbalanced updates. To address these challenges, we propose a novel Dynamic Mixture of Curriculum LoRA Experts (D-MoLE) method, which automatically evolves the MLLM's architecture under a controlled parameter budget to continually adapt to new tasks while retaining previously learned knowledge. Specifically, we propose a dynamic layer-wise expert allocator, which automatically allocates LoRA experts across layers to resolve architecture conflicts and routes instructions layer-wise to facilitate knowledge sharing among experts. We then propose a gradient-based inter-modal continual curriculum, which adjusts the update ratio of each module in the MLLM according to the difficulty of each modality within the task, alleviating the modality imbalance problem. Extensive experiments show that D-MoLE significantly outperforms state-of-the-art baselines, achieving a 15% average improvement over the best baseline. To the best of our knowledge, this is the first study of continual learning for MLLMs from an architectural perspective.
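To make the two components easier to picture, the minimal Python sketch below illustrates one plausible reading of the abstract: a budgeted, score-driven allocation of new LoRA experts across layers, and per-modality update ratios derived from gradient-based difficulty proxies. This is not the paper's algorithm or released code; the function names, the greedy top-k allocation, and the softmax weighting are illustrative assumptions.

```python
import math

# Illustrative sketch only -- not the authors' implementation.
# (1) Allocate a fixed budget of new LoRA experts across transformer layers
#     from per-layer importance scores (e.g., gradient norms on the new task).
# (2) Turn per-modality difficulty proxies into update ratios so the modality
#     that is harder for the current task receives a larger share of updates.

def allocate_lora_experts(layer_scores, expert_budget):
    """Greedily insert `expert_budget` new experts at the highest-scoring layers.

    Returns a 0/1 list over layers: 1 = add a new LoRA expert for this task,
    0 = leave the layer to frozen/previously learned experts.
    """
    ranked = sorted(range(len(layer_scores)),
                    key=lambda i: layer_scores[i], reverse=True)
    chosen = set(ranked[:expert_budget])
    return [1 if i in chosen else 0 for i in range(len(layer_scores))]

def modality_update_ratios(grad_norm_text, grad_norm_vision, temperature=1.0):
    """Softmax over per-modality gradient norms as a stand-in difficulty score."""
    logits = [grad_norm_text / temperature, grad_norm_vision / temperature]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return {"language": exps[0] / z, "vision": exps[1] / z}

if __name__ == "__main__":
    # Hypothetical scores for a 12-layer backbone and a budget of 4 new experts.
    scores = [0.9, 0.2, 0.4, 1.3, 0.1, 0.8, 0.05, 1.1, 0.3, 0.6, 0.7, 0.25]
    print(allocate_lora_experts(scores, expert_budget=4))
    print(modality_update_ratios(grad_norm_text=2.0, grad_norm_vision=0.5))
```

In this toy run, layers 3, 7, 0, and 5 receive new experts and the language module gets roughly 82% of the update budget; the actual D-MoLE allocator and curriculum are defined in the paper itself.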
Lay Summary: Modern AI models that process both language and images, such as those used in assistants that understand pictures, need to keep learning as new tasks appear. However, most existing methods use a fixed model structure and cannot easily adapt to new tasks without losing what they have already learned. Our research addresses this problem by proposing a way for the model to adjust its internal structure over time, while keeping the total number of parameters within a fixed budget. We found two main challenges. First, different tasks prefer to update different layers of the model. Second, some tasks rely more on language while others depend more on images, which creates an imbalance during training. To tackle this, we introduce a method called Dynamic Mixture of Curriculum LoRA Experts (D-MoLE). It selects which parts of the model to update based on the current task and balances updates across text and image components. Our experiments show that this approach outperforms existing baselines, helping the model continue learning effectively without forgetting earlier knowledge.
Primary Area: Deep Learning->Large Language Models
Keywords: Multimodal large language model, Continual learning, Parameter efficient tuning
Submission Number: 5611