UMU-Bench: Closing the Modality Gap in Multimodal Unlearning Evaluation

Published: 18 Sept 2025 | Last Modified: 30 Oct 2025 | NeurIPS 2025 Datasets and Benchmarks Track (poster) | License: CC BY 4.0
Keywords: Multimodal Unlearning Benchmark; Multimodal Large Language Model; Modality Alignment
Abstract: Although Multimodal Large Language Models (MLLMs) have advanced numerous fields, their training on extensive multimodal datasets raises significant privacy concerns, prompting the need for efficient unlearning methods. However, current multimodal unlearning approaches often directly adapt techniques from unimodal contexts, largely overlooking the critical issue of modality alignment, i.e., consistently removing knowledge across both unimodal and multimodal settings. To close this gap, we introduce UMU-bench, a unified benchmark specifically targeting modality misalignment in multimodal unlearning. UMU-bench consists of a meticulously curated dataset featuring 653 individual profiles, each described with both unimodal and multimodal knowledge. Additionally, we introduce novel tasks and evaluation metrics that focus on modality alignment, enabling a comprehensive analysis of unimodal and multimodal unlearning effectiveness. Through extensive experimentation with state-of-the-art unlearning algorithms on UMU-bench, we demonstrate that modality misalignment is prevalent in existing methods. These findings underscore the critical need for multimodal unlearning approaches that explicitly consider modality alignment.
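To make the modality-alignment notion concrete, here is a minimal sketch (not the benchmark's released metric; the function names, inputs, and matching rule are illustrative assumptions) of an alignment gap: the difference between how often forgotten knowledge still surfaces under text-only queries versus image+text queries about the same profiles.

```python
from typing import Sequence

def forget_rate(predictions: Sequence[str], forgotten_answers: Sequence[str]) -> float:
    """Fraction of queries where the forgotten answer no longer appears in the model output.
    (Substring matching is a stand-in for whatever answer-matching rule the benchmark uses.)"""
    hits = sum(ans.lower() in pred.lower() for pred, ans in zip(predictions, forgotten_answers))
    return 1.0 - hits / max(len(predictions), 1)

def modality_alignment_gap(text_only_preds: Sequence[str],
                           multimodal_preds: Sequence[str],
                           forgotten_answers: Sequence[str]) -> float:
    """Absolute difference in forget rates between unimodal (text-only) and multimodal
    (image + text) queries; 0.0 means unlearning is perfectly aligned across modalities."""
    return abs(forget_rate(text_only_preds, forgotten_answers)
               - forget_rate(multimodal_preds, forgotten_answers))
```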
Croissant File: json
Dataset URL: https://huggingface.co/datasets/linbojunzi/UMU-bench
Code URL: https://github.com/QDRhhhh/UMU-bench
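A minimal sketch of pulling the released data with the Hugging Face `datasets` library; the repo id comes from the Dataset URL above, but the split and column names are assumptions and should be checked against the dataset card.

```python
from datasets import load_dataset

# Load UMU-bench from the Hugging Face Hub.
ds = load_dataset("linbojunzi/UMU-bench")  # config/split names may differ; see the dataset card
print(ds)  # inspect available splits and columns before building forget/retain sets
```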
Primary Area: Social and economic aspects of datasets and benchmarks in machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Flagged For Ethics Review: true
Submission Number: 1573