Keywords: deep learning, large multimodal model, machine unlearning
TL;DR: We present the first LMM unlearning evaluation for pre-trained knowledge and sequential-request sustainability.
Abstract: Unlearning methods that enable models to “forget” have been studied in the context of privacy and copyright for LLMs/LMMs. However, evaluation of unlearning for LMMs remains limited, as existing benchmarks primarily focus on single-step unlearning of fine-tuned knowledge. We introduce PULSE, a practical unlearning evaluation protocol for LMMs along two dimensions: (i) Pre-trained knowledge Unlearning and (ii) Long-term Sustainability Evaluation under sequential requests. Our evaluation of existing unlearning methods shows that while they often succeed in unlearning fine-tuned knowledge, they struggle to unlearn pre-trained knowledge. Furthermore, even when single-step unlearning appears effective, the performance of the unlearned model deteriorates under repeated unlearning. These findings highlight the need for new techniques that can selectively remove pre-trained content while preserving model capabilities across successive requests.
Submission Number: 44