Does Machine Unlearning Truly Remove Knowledge?

Published: 27 Oct 2025, Last Modified: 27 Oct 2025 · NeurIPS Lock-LLM Workshop 2025 Poster · CC BY 4.0
Keywords: Machine Unlearning, Model Evaluation, Benchmark, Knowledge Extraction
TL;DR: A framework to systematically audit machine unlearning for LLMs.
Abstract: In recent years, Large Language Models (LLMs) have achieved remarkable advancements, drawing significant attention from the research community. Their capabilities are largely attributed to large-scale architectures, which require extensive training on massive datasets. However, such datasets often contain sensitive or copyrighted content sourced from the public internet, raising concerns about data privacy and ownership. Regulatory frameworks, such as the General Data Protection Regulation (GDPR), grant individuals the right to request the removal of such sensitive information. This has motivated the development of machine unlearning algorithms that aim to remove specific knowledge from models without costly retraining. Despite this progress, evaluating the efficacy of unlearning algorithms remains challenging due to the inherent complexity and generative nature of LLMs. In this work, we introduce a comprehensive auditing framework for unlearning evaluation, comprising 3 benchmark datasets, 6 unlearning algorithms, and 5 prompt-based auditing methods. Using these auditing methods, we evaluate the effectiveness and robustness of different unlearning strategies. To explore alternatives beyond prompt-based auditing, we propose a novel auditing technique based on intermediate activation perturbation. This approach offers a new perspective and serves as a potential direction for the future design of auditing algorithms. The complete framework and the proposed algorithm will be open-sourced upon manuscript acceptance.
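The sketch below illustrates, in general terms, what an activation-perturbation audit might look like; it is not the authors' implementation. The model path, the choice of layer, and the noise scale `NOISE_SCALE` are all assumptions for illustration: a forward hook injects Gaussian noise into one intermediate transformer block of an "unlearned" model, and the audit checks whether the supposedly removed knowledge resurfaces in the perturbed generation.

```python
# Hypothetical sketch of activation-perturbation auditing (not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/unlearned-model"  # placeholder checkpoint of an unlearned LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

NOISE_SCALE = 0.05  # assumed perturbation strength; would be swept in practice


def perturb_hook(module, inputs, output):
    """Add Gaussian noise to this layer's hidden states during the forward pass."""
    hidden = output[0] if isinstance(output, tuple) else output
    noisy = hidden + NOISE_SCALE * torch.randn_like(hidden)
    return (noisy,) + output[1:] if isinstance(output, tuple) else noisy


# Attach the hook to one intermediate block (Llama-style layout; index is an assumption).
target_layer = model.model.layers[len(model.model.layers) // 2]
handle = target_layer.register_forward_hook(perturb_hook)

# Audit prompt drawn from the forget set (placeholder text).
prompt = "Question about the forgotten entity: "
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unperturbed model
```

In practice, such an audit would compare perturbed and unperturbed generations across many forget-set prompts and noise scales, flagging cases where the target knowledge reappears under perturbation.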
Submission Number: 30