Robust Unlearning for Large Language Models

Published: 01 Jan 2025 · Last Modified: 31 Jul 2025 · PAKDD (5) 2025 · CC BY-SA 4.0
Abstract: With the rapid development of large language models (LLMs), we have witnessed intense competition among major LLM products such as ChatGPT, LLaMA, and Gemini. However, various issues with the training corpus (e.g., privacy leakage and copyright violation) remain underexplored. For example, The New York Times sued OpenAI and Microsoft for infringing on its copyrights by using millions of its articles for training. From the perspective of LLM practitioners, handling such unintended privacy violations is challenging. Previous work mainly approached the "unlearning" problem of LLMs via first-order gradient information, but these methods mostly lack theoretical guarantees. In this paper, we revisit unlearning from the perspective of second-order information (the Hessian). Our unlearning algorithms, inspired by the classic Newton update, are not only data-agnostic and model-agnostic but also admit provable upper bounds on utility or privacy loss. Through a comprehensive evaluation on common NLP datasets and case studies on real-world data, our methods consistently outperform first-order baselines.
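To make the Newton-update idea concrete (the paper's actual LLM-scale algorithm is not reproduced here), the following is a minimal sketch on a convex toy model, L2-regularized logistic regression, where the Hessian is well-defined and invertible. Starting from the parameters trained on the full data, a single Newton step on the retained-data objective approximately recovers the model retrained without the forget set. All function names (`newton_unlearn`, etc.) and the toy setup are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def retain_gradient(theta, X, y, lam):
    # Gradient of the mean logistic loss + (lam/2)||theta||^2 over (X, y).
    p = sigmoid(X @ theta)
    return X.T @ (p - y) / len(y) + lam * theta

def objective_hessian(theta, X, y, lam):
    # Hessian of the same objective: X^T diag(p(1-p)) X / n + lam * I.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    return (X * w[:, None]).T @ X / len(y) + lam * np.eye(X.shape[1])

def newton_unlearn(theta, X_retain, y_retain, lam=1e-2):
    # One second-order (Newton) step on the retained-data objective,
    # starting from the full-data optimum: theta' = theta - H^{-1} g.
    # For a strongly convex loss this lands close to full retraining.
    H = objective_hessian(theta, X_retain, y_retain, lam)
    g = retain_gradient(theta, X_retain, y_retain, lam)
    return theta - np.linalg.solve(H, g)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) + rng.normal(size=200) > 0).astype(float)

    # Train on the full data with plain Newton iterations (convex problem).
    theta = np.zeros(5)
    for _ in range(20):
        theta -= np.linalg.solve(objective_hessian(theta, X, y, 1e-2),
                                 retain_gradient(theta, X, y, 1e-2))

    # "Forget" the first 20 examples with a single second-order update.
    theta_unlearned = newton_unlearn(theta, X[20:], y[20:], lam=1e-2)
```

For LLM-scale models the exact Hessian is intractable, so second-order methods in this setting typically rely on approximations (e.g., diagonal or low-rank estimates); the sketch above only illustrates the underlying update rule.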