Achieving Exact Federated Unlearning with Improved Post-Unlearning Performance

ICLR 2025 Workshop BuildingTrust Submission 139 Authors

11 Feb 2025 (modified: 06 Mar 2025) · Submitted to BuildingTrust · CC BY 4.0
Track: Long Paper Track (up to 9 pages)
Keywords: Exact Federated Unlearning, Improved Post-Unlearning Performance, Multi-Model Training
TL;DR: This paper proposes federated learning methods that ensure exact federated unlearning while achieving better post-unlearning performance than retraining from scratch.
Abstract: Federated learning is a machine learning paradigm that allows multiple clients to train an aggregated model by sharing model updates with a central server without sharing their data. Although the data itself is never shared, it can still influence the aggregated model indirectly through the shared model updates. In many real-life scenarios, a client's influence must be completely removed (unlearned) from the aggregated model. For example, competitive clients may want their influence removed from a collaboratively trained model (e.g., large language models (LLMs) fine-tuned by multiple clients for a specific downstream task) after leaving the coalition, so that the remaining clients do not continue to benefit from their contributions. Influence removal is also needed when an adversarial client degrades the aggregated model. Although the aggregated model can be retrained from scratch to ensure exact unlearning (completely removing the client's influence from the aggregated model), the retrained model performs poorly immediately after unlearning, which is undesirable during deployment. To overcome this challenge, this paper proposes federated unlearning algorithms that ensure exact unlearning while achieving better post-unlearning performance. The proposed algorithms are problem-agnostic, making them applicable across various domains. Our experimental results further validate the effectiveness of the proposed federated unlearning algorithms in fine-tuning LLMs and performing vision tasks within a federated learning framework on real-world datasets.
Submission Number: 139