Keywords: Machine Unlearning, Benchmark, Multimodal Learning
TL;DR: We propose the first comprehensive multitask, multimodal benchmark for machine unlearning (MU), covering tasks and modalities that have received limited study, together with a suite of base models to standardize MU evaluation.
Abstract: Recent advances in Machine Unlearning (MU) have introduced solutions for selectively removing certain training samples, such as those containing outdated or sensitive information, from trained models. Despite these advances, the evaluation of MU methods has been inconsistent, employing different trained models, architectures, and sample removal strategies, which hampers accurate comparison. In addition, prior MU approaches have mainly focused on \emph{singular} tasks or modalities, limiting the breadth of their evaluation. To address these limitations, we develop MU-Bench, the first comprehensive benchmark for MU that \emph{(i) unifies the sets of deleted samples and trained models}, and \emph{(ii) provides broad coverage of tasks and data modalities}, including previously unexplored domains such as speech and video classification. Our evaluation shows that RandLabel and SalUn are the most effective general MU approaches on MU-Bench, and that BadT and SCRUB can achieve random performance on the deletion set. We analyze several under-investigated aspects of unlearning, including scalability, the impact of parameter-efficient fine-tuning and curriculum learning, and susceptibility to dataset biases. MU-Bench provides an easy-to-use package that includes dataset splits, models, and implementations, together with a leaderboard, to enable unified and scalable MU research.
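To make the RandLabel baseline named above concrete, here is a minimal PyTorch sketch of random-label unlearning: the trained model is fine-tuned on the deletion (forget) set with its true labels replaced by random ones. This is a generic illustration of the technique, not MU-Bench's implementation; the names `model`, `forget_loader`, and `num_classes` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def randlabel_unlearn(model, forget_loader, num_classes, epochs=1, lr=1e-4):
    """Fine-tune `model` on the forget set with randomly reassigned labels."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, _ in forget_loader:  # original labels are discarded
            # Assign a uniformly random class to each forget-set sample
            rand_labels = torch.randint(0, num_classes, (inputs.size(0),))
            loss = F.cross_entropy(model(inputs), rand_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

The intuition is that optimizing toward random targets pushes the model's predictions on the deleted samples toward chance level while leaving the rest of its knowledge largely intact, which is why deletion-set accuracy near random is used as an unlearning criterion.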
Submission Number: 136