InstantForget: Training-Free Functional Feature Unlearning via Subspace Projection and Inference-Time Smoothing
Keywords: Training-Free Unlearning, Subspace Projection, Inference-Time Smoothing, Feature Sensitivity Suppression, Federated Deployment
TL;DR: InstantForget is a training-free framework that performs functional unlearning at inference time, combining subspace projection with gated randomized smoothing, requiring no parameter updates or FL communication.
Abstract: The demand for efficient machine unlearning is rising as deployed models in safety-critical and privacy-sensitive domains must comply with regulations such as GDPR and CCPA, which grant the ``right to be forgotten.'' In federated learning (FL), where data are distributed and communication is expensive, forgetting must be performed without retraining from scratch or sacrificing model utility. Existing approaches typically implement unlearning by parameter retraining or fine-tuning, incurring high computational cost, requiring access to the retain set, and adding global communication rounds. We introduce \textbf{InstantForget}, a training-free framework that achieves \emph{functional unlearning} by editing the input–output mapping of a pretrained model purely at inference time. InstantForget operates in two stages: (i) a \textit{subspace projection} step that estimates trigger-sensitive directions from paired features and cancels their linear contributions via orthogonal projection, and (ii) a \textit{gated randomized smoothing} step that suppresses residual nonlinear dependencies by perturb-and-aggregate inference restricted to sensitive coordinates. Our method preserves accuracy on the retain set while driving model behavior on the forget set close to that of a retrained model, achieving a near-zero forgetting gap with no parameter updates or FL communication. Experiments on MNIST, CIFAR-10, and ImageNet-Subset show up to $90\%$ reduction in attack success rate with under $1\%$ drop in clean accuracy, highlighting InstantForget as a practical and energy-efficient solution for post-hoc deployment.
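Code sketch (not from the submission): a minimal NumPy illustration of the two stages the abstract describes, under assumed interfaces. Here features_clean and features_trig are hypothetical paired (N, d) feature matrices, head is a hypothetical callable mapping a d-dimensional feature vector to logits, and the rank k, noise scale sigma, and sample count are illustrative choices rather than the paper's settings.

import numpy as np

def sensitive_subspace(features_clean, features_trig, k=4):
    """Estimate top-k trigger-sensitive directions via an SVD of paired
    feature differences (one plausible reading of the projection step)."""
    diffs = features_trig - features_clean           # (N, d) sensitivity signal
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k].T                                  # (d, k) orthonormal basis U

def project_out(z, U):
    """Cancel linear contributions along span(U): z <- (I - U U^T) z."""
    return z - U @ (U.T @ z)

def gated_smoothed_logits(z, U, head, sigma=0.25, n_samples=32, rng=None):
    """Perturb-and-aggregate inference restricted to sensitive coordinates:
    Gaussian noise is injected only inside span(U), then logits are averaged."""
    rng = np.random.default_rng() if rng is None else rng
    z_proj = project_out(z, U)                       # stage (i): linear cancellation
    logits = []
    for _ in range(n_samples):
        eps = rng.normal(0.0, sigma, size=U.shape[1])  # noise in subspace coordinates
        logits.append(head(z_proj + U @ eps))          # stage (ii): gated perturbation
    return np.mean(logits, axis=0)

At inference, one would replace the model's usual head call with gated_smoothed_logits(f(x), U, head) for a feature extractor f, leaving all parameters untouched, consistent with the abstract's claim of no parameter updates or FL communication.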
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 17093