Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization

ACL ARR 2025 February Submission4577 Authors

15 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: Language models can retain dangerous knowledge and skills even after extensive safety fine-tuning, posing both misuse and misalignment risks. Recent studies show that even specialized unlearning methods can be easily reversed. To address this, we systematically evaluate many existing and novel components of unlearning methods and identify those crucial for irreversible unlearning. We introduce Disruption Masking, a technique that only allows weight updates where the unlearning gradient and the retaining gradient share the same sign, ensuring all updates are non-disruptive. Additionally, we identify the need to normalize the unlearning gradients, and we confirm the usefulness of meta-learning. We combine these insights into MUDMAN (Meta-Unlearning with Disruption Masking and Normalization) and validate its effectiveness at preventing the recovery of dangerous capabilities. Our results show that MUDMAN significantly outperforms the prior TAR method, setting a new state of the art for robust unlearning.
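The core update rule from the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, learning rate, and normalization choice (global L2 norm of the unlearning gradient) are illustrative assumptions. It shows the two components the abstract names: normalizing the unlearning gradient, then masking out every weight where the unlearning and retaining gradients disagree in sign.

```python
import math

def disruption_masked_step(weights, unlearn_grad, retain_grad, lr=0.1):
    """Hypothetical sketch of Disruption Masking with normalization:
    normalize the unlearning gradient, then update only those weights
    where the unlearning and retaining gradients share the same sign
    (i.e., where the unlearning update does not disrupt retention)."""
    # Normalize the unlearning gradient (global L2 norm, an assumption).
    norm = math.sqrt(sum(g * g for g in unlearn_grad)) or 1.0
    new_weights = []
    for w, gu, gr in zip(weights, unlearn_grad, retain_grad):
        gu_n = gu / norm
        if (gu_n > 0) == (gr > 0):  # signs agree -> non-disruptive update
            w = w - lr * gu_n
        new_weights.append(w)
    return new_weights

# Example: the second weight is left untouched because the gradients disagree.
updated = disruption_masked_step(
    weights=[1.0, 1.0],
    unlearn_grad=[3.0, 4.0],   # normalized to [0.6, 0.8]
    retain_grad=[1.0, -1.0],
)
# updated -> [0.94, 1.0]
```

In a real model this masking would be applied elementwise to each parameter tensor inside the training loop, with the retaining gradient computed on a retain set and the unlearning gradient on the forget set.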
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: meta learning, machine learning for NLP, language modeling, fine-tuning, security and privacy, red teaming, robustness, adversarial attacks
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 4577