Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond
Abstract: The LLM unlearning technique has recently been introduced to comply with data regulations and to address the safety and ethical concerns of LLMs by removing undesired data-model influence.
However, state-of-the-art unlearning methods face a critical vulnerability: they are susceptible to "relearning" the removed information from a small number of forget data points, known as relearning attacks. In this paper, we systematically investigate how to make unlearned models robust against such attacks. For the first time, we establish a connection between robust unlearning and sharpness-aware minimization (SAM) through a unified robust optimization framework, in analogy to adversarial training designed to defend against adversarial attacks. Our analysis of SAM reveals that smoothness optimization plays a pivotal role in mitigating relearning attacks. Thus, we further explore diverse smoothing strategies to enhance unlearning robustness. Extensive experiments on benchmark datasets, including WMDP and MUSE, demonstrate that SAM and other smoothness optimization approaches consistently improve the resistance of LLM unlearning to relearning attacks. Notably, smoothness-enhanced unlearning also helps defend against (input-level) jailbreaking attacks, broadening the impact of our proposal in robustifying LLM unlearning. Code is available at https://github.com/OPTML-Group/Unlearn-Smooth.
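To make the SAM connection concrete, below is a minimal sketch (not the authors' released code) of a single SAM-style update applied to a generic unlearning objective. The loss function `unlearn_loss`, the perturbation radius `rho`, and the model/optimizer names are illustrative assumptions; the sketch only shows the two-pass ascend-then-descend structure that SAM adds on top of an unlearning loss.

```python
# Sketch of one SAM-style update for LLM unlearning (illustrative, PyTorch-style).
# `unlearn_loss(model, forget_batch, retain_batch)` is an assumed scalar unlearning
# objective (e.g., forget-set loss combined with a retain-set utility term).
import torch

def sam_unlearn_step(model, optimizer, unlearn_loss, forget_batch, retain_batch, rho=0.05):
    optimizer.zero_grad()

    # 1) First pass: gradient of the unlearning loss at the current weights.
    loss = unlearn_loss(model, forget_batch, retain_batch)
    loss.backward()

    # 2) Ascend to approximate worst-case weights within an L2 ball of radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)          # w <- w + eps
            eps.append(e)
    optimizer.zero_grad()

    # 3) Second pass: gradient at the perturbed weights defines the actual update.
    loss_perturbed = unlearn_loss(model, forget_batch, retain_batch)
    loss_perturbed.backward()

    # 4) Restore the original weights, then step with the SAM (perturbed-point) gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss_perturbed.item()
```

Intuitively, minimizing the unlearning loss at the worst-case nearby weights flattens the loss landscape around the unlearned model, which is the smoothness property the paper links to resistance against relearning from a few forget data points.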
Lay Summary: Large language models (LLMs) can be taught to “forget” certain data to meet legal or ethical needs. But current methods have a flaw: if the model sees just a small part of the old data again, it might accidentally relearn it — a serious risk called a relearning attack.
Our research shows that a smarter training approach, called sharpness-aware minimization (SAM), can make forgetting more reliable. It helps the model stay stable and less likely to pick up unwanted information again.
We tested this on standard benchmarks and found that it not only improves resistance to relearning but also helps defend against other attacks, such as jailbreaking. Code is available at https://github.com/OPTML-Group/Unlearn-Smooth.
Link To Code: https://github.com/OPTML-Group/Unlearn-Smooth
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Model, Machine Unlearning, Robustness, Relearning Attack
Submission Number: 8161