Fine-Grained and Efficient Self-Unlearning with Layered Iteration

Published: 2025 · Last Modified: 25 Mar 2026 · IJCAI 2025 · License: CC BY-SA 4.0
Abstract: As machine learning models become widely deployed in data-driven applications, ensuring compliance with the 'right to be forgotten' required by many privacy regulations is vital for safeguarding user privacy. To forget the given data, existing re-labeling-based unlearning methods employ a single-step adjustment scheme that revises the decision boundaries in one re-labeling phase. However, such single-step approaches lead to coarse-grained changes in the decision boundaries among the remaining classes and degrade model utility. To address these limitations, we propose 'Self-Unlearning with Layered Iteration (SULI),' a novel unlearning approach that introduces a layered iteration strategy to re-label the forgetting data iteratively and refine the decision boundaries progressively. We further develop a 'Selective Probability Adjustment (SPA)' technique, which uses a soft-label mechanism to promote smoother decision-boundary transitions. Comprehensive experiments on three benchmark datasets demonstrate that SULI outperforms state-of-the-art baselines in effectiveness, efficiency, and privacy in both class-wise and instance-wise unlearning scenarios. The source code is released at https://github.com/Hongyi-Lyu-MQ/SULI.
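To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch of iterative soft re-labeling: a forgetting class's probability mass is redistributed over the remaining classes (one possible reading of SPA's soft labels), and the re-label is applied gradually over several iterations rather than in a single hard step (the layered-iteration idea). All function names and the specific blending scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_relabel(probs, forget_class):
    """Hypothetical SPA-style soft label: zero out the forgetting class
    and renormalise the remaining probability mass into a distribution."""
    soft = np.asarray(probs, dtype=float).copy()
    soft[forget_class] = 0.0
    total = soft.sum()
    if total == 0.0:  # degenerate case: fall back to uniform over the rest
        soft[:] = 1.0
        soft[forget_class] = 0.0
        total = soft.sum()
    return soft / total

def layered_relabel(probs, forget_class, steps=3):
    """Hypothetical layered iteration: blend the original prediction toward
    the soft label over `steps` rounds instead of one hard re-labeling.
    Returns the sequence of intermediate soft labels; in training, the
    model would be fine-tuned against each one in turn."""
    current = np.asarray(probs, dtype=float)
    target = soft_relabel(current, forget_class)
    labels = []
    for t in range(1, steps + 1):
        alpha = t / steps  # shift a little further at every iteration
        labels.append((1.0 - alpha) * current + alpha * target)
    return labels
```

Each intermediate label is still a valid probability distribution, and the forgetting class's probability decays to zero only at the final iteration, which is what produces the smoother decision-boundary transition the abstract attributes to the layered scheme.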