Research Area: Alignment, Learning algorithms for LMs
Keywords: Unlearning, RLHF, preference optimization
TL;DR: We propose a simple alignment-inspired objective function for machine unlearning, achieving state-of-the-art performance on the TOFU dataset.
Abstract: Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from the pre-trained model while preserving the model's utilities on other tasks. Several practical methods have recently been proposed for LLM unlearning, mostly based on gradient ascent (GA) on the loss of undesirable data. However, on certain unlearning tasks, these methods either fail to effectively unlearn the target data or suffer from catastrophic collapse --- a drastic degradation of the model's utilities.
In this paper, we propose \emph{Negative Preference Optimization} (NPO), a simple alignment-inspired method that can efficiently and effectively unlearn a target dataset. We theoretically show that minimizing the NPO loss progresses toward catastrophic collapse exponentially more slowly than GA. Through experiments
on synthetic data and the benchmark TOFU dataset, we demonstrate that NPO-based methods achieve a better balance between unlearning the undesirable data and maintaining the model's utilities.
We also observe that NPO-based methods generate more sensible outputs than GA-based methods, whose outputs are often gibberish.
Remarkably, on TOFU, NPO-based methods are the first to achieve reasonable unlearning results when forgetting 50\% (or more) of the training data, whereas existing methods already struggle with forgetting 10\% of the training data.
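To make the contrast with gradient ascent concrete, below is a minimal PyTorch sketch of an NPO-style forget-set loss next to the GA baseline. The exact objective, the hyperparameter value, and the function names are illustrative assumptions based on the abstract's description of an alignment-inspired (DPO-like) loss that treats forget-set responses as dispreferred; they are not the paper's verbatim implementation.

```python
import torch.nn.functional as F

def npo_style_loss(policy_logprobs, ref_logprobs, beta=0.1):
    """Sketch of an NPO-style unlearning loss on the forget set (assumed form).

    policy_logprobs: summed token log-probs of the forget response under the
        current model, shape (batch,).
    ref_logprobs: same quantity under the frozen reference (pre-unlearning)
        model, shape (batch,).
    beta: inverse-temperature hyperparameter (value is illustrative).

    The forget response plays the role of the dispreferred side of a
    DPO-style objective with no preferred response, so the loss shrinks as
    the model lowers the forget data's probability relative to the reference.
    """
    log_ratio = policy_logprobs - ref_logprobs          # log pi_theta / pi_ref
    # (2 / beta) * log(1 + (pi_theta / pi_ref) ** beta), via softplus for stability
    loss = (2.0 / beta) * F.softplus(beta * log_ratio)
    return loss.mean()

def gradient_ascent_loss(policy_logprobs):
    """GA baseline: minimizing this pushes forget-set log-probs down without bound."""
    return policy_logprobs.mean()
```

Unlike the GA loss, whose gradient magnitude does not shrink as the forget data becomes unlikely, the softplus term in the NPO-style loss saturates once the policy's probability falls well below the reference's, which is one intuition for the slower drift toward collapse.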
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 412