Keywords: MLLMs, Unlearning for Language Models, Continual Learning, MoE
Abstract: Multimodal large language models (MLLMs) are trained on massive multimodal data, making data unlearning increasingly important as data owners may request the removal of specific content. In practice, these requests often arrive sequentially over time, creating the problem of *MLLM Lifelong Unlearning*.
However, existing benchmarks have not considered the MLLM lifelong unlearning scenario.
To study this problem, we introduce MLUBench, a comprehensive benchmark for assessing the performance of unlearning methods under MLLM lifelong unlearning. MLUBench comprises 127 entities across 9 classes and simulates sequentially arriving unlearning requests.
We evaluate existing unlearning methods and find that sequential unlearning severely degrades model utility and forget quality.
To address this challenge, we propose an efficient method called LUMoE, which routes inputs to switchable LoRA adapters via a gate module (a minimal sketch follows the abstract), eliminating the need for incremental training.
Experiments demonstrate that LUMoE significantly outperforms baselines in both model utility and forget quality, with no degradation as unlearning requests accumulate.
Source code and the MLUBench dataset are available at this anonymous [URL](https://anonymous.4open.science/r/Lifelong_Unlearning_main-72EC/).
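
The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the gated, switchable-LoRA design it describes. All names here (`LoRAAdapter`, `GatedSwitchableLoRA`, `add_request`) are illustrative assumptions, not the authors' actual LUMoE code; the released repository at the anonymous URL above is the authoritative reference.

```python
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank residual delta: x -> (alpha / rank) * up(down(x))."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)
        self.up = nn.Linear(rank, out_dim, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x)) * self.scale


class GatedSwitchableLoRA(nn.Module):
    """Frozen base linear layer plus one switchable LoRA adapter per forget request.

    A gate scores each input and picks a route: index 0 applies no adapter,
    index i applies adapter i. Handling a new unlearning request only trains
    a fresh adapter and the (extended) gate, so earlier adapters and the base
    model are never retrained -- hypothetically how "no incremental training"
    could be realized.
    """

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base MLLM weights stay frozen
        self.adapters = nn.ModuleList()
        self.gate = nn.Linear(base.in_features, 1)  # one logit per route

    def add_request(self, rank: int = 8) -> None:
        """Register a new adapter and grow the gate by one output logit."""
        self.adapters.append(
            LoRAAdapter(self.base.in_features, self.base.out_features, rank)
        )
        old_gate = self.gate
        self.gate = nn.Linear(old_gate.in_features, len(self.adapters) + 1)
        with torch.no_grad():  # preserve logits of the existing routes
            self.gate.weight[: old_gate.out_features].copy_(old_gate.weight)
            self.gate.bias[: old_gate.out_features].copy_(old_gate.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_features)
        out = self.base(x)
        if not self.adapters:
            return out
        # Route on the mean token representation; hard switch at inference.
        choice = self.gate(x.mean(dim=1)).argmax(dim=-1)  # (batch,)
        for i, adapter in enumerate(self.adapters, start=1):
            mask = (choice == i).view(-1, 1, 1).to(x.dtype)
            out = out + mask * adapter(x)
        return out


if __name__ == "__main__":
    layer = GatedSwitchableLoRA(nn.Linear(512, 512))
    layer.add_request()  # first forget request arrives
    layer.add_request()  # second request: adapter 1 is left untouched
    y = layer(torch.randn(2, 16, 512))
    print(y.shape)  # torch.Size([2, 16, 512])
```

Under this reading, each sequential forget request adds a small, independently trained adapter rather than updating shared weights, which is one plausible way the reported stability in utility and forget quality across the request stream could be achieved.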
Primary Area: datasets and benchmarks
Submission Number: 12417