FELEMN: Toward Efficient Feature-Level Machine Unlearning for Exact Privacy Protection

Published: 01 Jan 2025 · Last Modified: 20 Nov 2025 · IEEE Trans. Knowl. Data Eng. 2025 · CC BY-SA 4.0
Abstract: Data privacy legislation around the world increasingly enforces the “right to be forgotten,” spurring research interest in machine unlearning (MU), which aims to remove the impact of training data from machine learning models upon revocation requests from data owners. MU performance faces two major challenges: execution efficiency and inference interference. The former requires minimizing the computational overhead of each execution of the MU mechanism, while the latter calls for reducing the execution frequency to minimize interference with normal inference services. Most existing MU studies focus on the sample-level unlearning setting, leaving the equally important feature-level setting under-explored, and adapting these techniques to the latter turns out to be non-trivial. The only known feature-level work achieves an approximate unlearning guarantee, but it suffers from degraded model accuracy and leaves the inference interference challenge unsolved. We therefore propose FELEMN, the first FEature-Level Exact Machine uNlearning method that overcomes both hurdles. For the execution efficiency challenge, we study how different feature partitioning strategies preserve semantic relationships among features, so as to maintain model accuracy while keeping unlearning efficient. For the inference interference challenge, we propose two batching mechanisms, grounded in theoretical analysis, that process as many individual unlearning requests together as possible while avoiding the privacy issues that arise from improperly postponing unlearning requests. Experiments on five real datasets show that FELEMN outperforms state-of-the-art competitors, with up to $3\times$ speedup per MU execution and a 50% runtime reduction from mitigated inference interference.
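
To make the feature-level exact unlearning idea concrete, the following is a minimal Python sketch, not the authors' implementation, of one way a partition-and-retrain strategy like the one the abstract alludes to could work: features are split into groups, one sub-model is trained per group, and revoking a feature requires retraining only the sub-model whose group contains it, while predictions aggregate all sub-models. All names here (FeatureShardEnsemble, n_groups, the contiguous split, the logistic-regression sub-models, probability averaging) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of feature-level exact unlearning via feature partitioning.
# Assumed design (not from the paper): contiguous feature groups, one logistic
# regression per group, and prediction by averaging class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression


class FeatureShardEnsemble:
    def __init__(self, n_groups=4):
        self.n_groups = n_groups
        self.groups = []   # one array of feature indices per sub-model
        self.models = []   # one sub-model per feature group

    def fit(self, X, y):
        # Partition feature indices into groups and train one sub-model
        # per group on its own feature slice only.
        self.groups = list(np.array_split(np.arange(X.shape[1]), self.n_groups))
        self.models = [
            LogisticRegression(max_iter=1000).fit(X[:, idx], y)
            for idx in self.groups
        ]
        return self

    def unlearn_feature(self, X, y, feature):
        # Exact unlearning of one feature: drop it from its group and retrain
        # ONLY that group's sub-model; every other sub-model is untouched.
        for g, idx in enumerate(self.groups):
            if feature in idx:
                new_idx = idx[idx != feature]
                self.groups[g] = new_idx
                self.models[g] = LogisticRegression(max_iter=1000).fit(
                    X[:, new_idx], y
                )
                return g
        raise ValueError(f"feature {feature} is not in any group")

    def predict_proba(self, X):
        # Aggregate sub-model outputs by averaging their class probabilities.
        probs = [
            m.predict_proba(X[:, idx])
            for m, idx in zip(self.models, self.groups)
        ]
        return np.mean(probs, axis=0)
```

Under this sketch, a batching mechanism in the spirit of the abstract would queue incoming feature-revocation requests and retrain each affected sub-model once per batch rather than once per request, which is where the inference-interference savings would come from; how FELEMN batches safely without postponing requests past their privacy deadlines is the subject of the paper's theoretical analysis.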