Towards Evaluation for Real-World LLM Unlearning

ICLR 2026 Conference Submission 15878 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: machine unlearning, large language model, evaluation metric
Abstract: This paper analyzes the limitations of existing unlearning evaluation metrics in terms of practicality, exactness, and robustness in real-world LLM unlearning scenarios, and reveals the differences between core tokens and non-core tokens during unlearning. To overcome these limitations, we propose a new metric, Distribution Correction-based Unlearning Evaluation (DCUE), which identifies core tokens and corrects distributional biases in their confidence scores using a validation set; the final evaluation result is quantified with the Kolmogorov–Smirnov test. Experimental results demonstrate that DCUE overcomes the limitations of existing metrics, and its design further guides the development of more practical and reliable unlearning algorithms.
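The abstract's final quantification step uses the Kolmogorov–Smirnov test on token confidence scores. As a rough illustration only (the paper's exact procedure, including core-token identification and bias correction, is not reproduced here), the two-sample KS statistic compares the empirical CDFs of two score samples; names like `core_confidences` below are hypothetical:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum gap between the two empirical CDFs."""
    a = sorted(sample_a)
    b = sorted(sample_b)

    def ecdf(sorted_vals, x):
        # Fraction of values <= x in the sorted sample.
        return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

    # The maximum CDF gap is attained at an observed data point.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))

# Hypothetical usage: confidence scores of core tokens from an unlearned
# model vs. a (bias-corrected) reference distribution from a validation set.
core_confidences = [0.12, 0.35, 0.41, 0.77]
reference_scores = [0.10, 0.33, 0.45, 0.80]
print(ks_statistic(core_confidences, reference_scores))
```

A larger statistic indicates the two confidence distributions diverge more, i.e. the unlearned model's behavior on core tokens differs more from the reference.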
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15878