TL;DR: We show that in-context unlearning enables near-constant-time exact unlearning of fine-tuning data.
Abstract: Modern machine learning models are expensive to train, and there is growing concern about the challenge of retroactively removing specific training data. Achieving exact unlearning in deep learning pipelines (producing models as if certain data had never been included in training) remains an open problem. In this paper, we revisit exact unlearning in deep learning and show that for large language models (LLMs) we can efficiently and exactly unlearn "fine-tuning data" (the data used to adapt a pre-trained model). This follows from two observations. First, we can use in-context learning to adapt the LLM to the fine-tuning dataset instead of SGD-based algorithms. Second, we show that accurate in-context learning can be achieved with quantized k-means, which allows for effectively constant-time unlearning operations. Our evaluation shows that this unlearning recipe matches the performance of fine-tuning alternatives while vastly reducing unlearning costs. Our study also highlights the need for new measures of unlearning cost when adapting the learning algorithm to support faster unlearn operations.
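To make the recipe concrete, the sketch below illustrates how a quantized k-means index over fine-tuning examples can support retrieval of in-context examples and a constant-time unlearn operation. This is only an illustration under stated assumptions, not the authors' implementation: the class name `QuantizedICLStore`, the coarse-grid quantization, and the single-cluster retrieval rule are all hypothetical details.

```python
# Hypothetical sketch of in-context learning with a quantized k-means index
# and a constant-time unlearn operation. All names and details here are
# illustrative assumptions, not the paper's actual implementation.
import numpy as np


class QuantizedICLStore:
    """Buckets fine-tuning examples by quantized k-means cluster.

    Retrieval prepends examples from the query's nearest cluster;
    unlearning removes one example from one bucket, so its cost is
    independent of the model and of the dataset size.
    """

    def __init__(self, embeddings, examples, k=8, grid=0.5, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = embeddings[rng.choice(len(embeddings), k, replace=False)].copy()
        for _ in range(iters):  # plain Lloyd's k-means
            assign = self._nearest(embeddings)
            for c in range(k):
                members = embeddings[assign == c]
                if len(members):
                    self.centroids[c] = members.mean(axis=0)
        # Quantize centroids to a coarse grid: a single deletion then
        # (usually) leaves every centroid unchanged, so in the common case
        # unlearning needs no re-clustering. (The paper's mechanism may
        # differ; this is only a sketch of the idea.)
        self.centroids = np.round(self.centroids / grid) * grid
        assign = self._nearest(embeddings)
        self.buckets = {c: {} for c in range(k)}  # cluster -> {id: example}
        self.location = {}                        # id -> cluster, for O(1) delete
        for i, (c, ex) in enumerate(zip(assign, examples)):
            self.buckets[int(c)][i] = ex
            self.location[i] = int(c)

    def _nearest(self, x):
        dists = ((x[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    def retrieve(self, query_emb, n=4):
        """Return up to n examples to prepend to the prompt."""
        c = int(self._nearest(query_emb[None, :])[0])
        return list(self.buckets[c].values())[:n]

    def unlearn(self, example_id):
        """Exactly remove one example: one dict lookup and one deletion."""
        c = self.location.pop(example_id)
        del self.buckets[c][example_id]


# Toy usage: build a store, retrieve a prompt prefix, unlearn example 3.
rng = np.random.default_rng(1)
emb = rng.normal(size=(100, 16)).astype(np.float32)
store = QuantizedICLStore(emb, [f"example {i}" for i in range(100)])
prefix = store.retrieve(emb[0])
store.unlearn(3)
```

In this sketch, the unlearn operation amounts to two O(1) dictionary updates; the LLM itself is never modified, because the fine-tuning data only enters through the retrieved prompt prefix.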
Lay Summary: After deploying a model, it may become necessary to "unlearn" some of the original training data. Exactly unlearning training data has been expensive for deep learning, and in this paper we showed that it can be efficient when adapting a pre-trained LLM to a task. This followed from observing that a sometimes effective learning algorithm is simply prepending training examples to the prompt given to an LLM. We studied ways of unlearning this selection of examples, and found we could do so at a cost independent of the model and dataset size. We also observed that all past efforts to make unlearning faster also increased inference cost, and proposed new metrics to capture this trade-off.
Primary Area: Social Aspects->Privacy
Keywords: Exact Unlearning, Large Language Models, In-Context Learning
Submission Number: 8049