Keywords: Federated Learning, Federated Unlearning
Abstract: Federated unlearning enables the removal of a specific client's data contribution from a trained federated model, thereby avoiding the substantial computational cost of complete retraining. However, existing methods suffer from high memory overhead, training instability, and performance degradation on remaining clients, particularly in non-IID settings. These challenges arise from fundamental issues including gradient explosion and the conflict between forgetting and retaining gradients. To address these limitations, we propose Federated Unlearning with GrAdient Shielding (FUGAS), which integrates a novel forgetting loss with a flexible gradient projection to achieve efficient unlearning while preserving model utility, all without storing extensive historical information. Specifically, we formulate unlearning as a preference optimization problem. The model's original predictions on the data to be forgotten serve as a negative reference, and our objective function encourages the model's current outputs to diverge from this reference, effectively erasing the targeted knowledge. Concurrently, during the server aggregation phase, gradients from unlearning clients are projected onto a dynamically estimated compatibility subspace derived from the gradients of retained clients, which ensures directional coherence and mitigates destructive interference between competing updates. Furthermore, we provide theoretical guarantees that our forgetting loss prevents gradient explosion and that the projection ensures non-increasing risk on the retained tasks. Extensive experiments demonstrate that FUGAS not only achieves thorough unlearning but also consistently maintains or even improves the model's accuracy on retained data.
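To make the preference-style forgetting objective concrete, below is a minimal PyTorch sketch of one plausible instantiation for classification. The function name `forgetting_loss`, the argument names, and the bounded log-sigmoid form (in the spirit of negative-preference objectives) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def forgetting_loss(cur_logits, ref_logits, labels, beta=0.5):
    """Illustrative bounded forgetting loss (hypothetical sketch).

    cur_logits: outputs of the model being unlearned on forget-set inputs.
    ref_logits: frozen outputs of the original model (the negative reference).
    labels:     forget-set labels whose likelihood should be suppressed.
    """
    cur_lp = F.log_softmax(cur_logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
    ref_lp = F.log_softmax(ref_logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
    # The loss shrinks as the current model assigns the forget labels *lower*
    # likelihood than the reference; the log-sigmoid saturates, keeping the
    # gradient bounded, unlike naive gradient ascent on cross-entropy.
    return -(2.0 / beta) * F.logsigmoid(-beta * (cur_lp - ref_lp)).mean()

# Example usage on a forget-set batch (original_model is kept frozen):
# loss = forgetting_loss(model(x_f), original_model(x_f).detach(), y_f)
```

The bounded form reflects the abstract's gradient-explosion claim: as the targeted knowledge is erased, the gradient magnitude decays instead of growing without bound.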
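Likewise, the server-side shielding step can be sketched as a projection that strips components of the unlearning client's update that oppose retained clients' gradients. This conflict-removal loop (the helper `shield_gradient` is hypothetical, PCGrad-style) is only one way to realize a compatibility subspace; the paper's dynamic estimation may differ.

```python
import torch

def shield_gradient(g_forget, retained_grads, eps=1e-12):
    """Project the unlearning client's flattened gradient away from
    directions that conflict with retained clients' gradients
    (illustrative stand-in for the compatibility-subspace projection)."""
    g = g_forget.clone()
    for g_r in retained_grads:
        dot = torch.dot(g, g_r)
        if dot < 0:  # conflicting direction: remove the component along g_r
            g = g - (dot / (g_r.norm() ** 2 + eps)) * g_r
    return g

# Toy check: the conflicting direction is stripped, the aligned one is kept.
g_u = torch.tensor([1.0, -1.0])
g_r = torch.tensor([0.0, 1.0])
print(shield_gradient(g_u, [g_r]))  # tensor([1., 0.])
```

After each projection step the shielded gradient has a non-negative inner product with the retained gradient just processed, which is the first-order condition underlying a non-increasing-risk guarantee on retained tasks.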
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 14846