Abstract: Federated learning (FL) enables multiple clients to collaboratively train a global model by exchanging model parameters or gradients without sharing their private data. However, these shared updates inherently embed the influence of clients’ local datasets into the global model. In practical applications, it is often necessary to quickly remove or unlearn a specific client’s influence from the global model, whether to comply with data privacy regulations or to mitigate the impact of malicious clients, while minimizing operational downtime and security vulnerabilities. To address these challenges, we propose two FL-agnostic algorithms, called BMT and MMT, which ensure the complete removal of the target client’s influence with minimal delay. While BMT derives a new global model initialization by aggregating isolated pre-trained local models, MMT selectively aggregates sub-FL models trained across disjoint client subsets to better capture their cross-influence on the global model. We empirically show that both algorithms lead to improved post-unlearning performance across different data modalities and model architectures.
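The BMT idea described above might be sketched as follows. This is a hypothetical illustration only: the abstract does not specify the aggregation rule, so plain parameter averaging over the retained clients' isolated local models is an assumption, and all names (`aggregate_local_models`, `forget_ids`) are illustrative.

```python
# Hypothetical sketch of a BMT-style re-initialization: after unlearning,
# a new global model is initialized by aggregating the remaining clients'
# isolated pre-trained local models. Plain parameter averaging is an
# assumption; the paper's actual aggregation rule may differ.
from typing import Dict, List

def aggregate_local_models(local_models: List[Dict[str, List[float]]],
                           forget_ids: List[int]) -> Dict[str, List[float]]:
    """Average the parameters of the retained clients' local models,
    excluding the clients being unlearned."""
    retained = [m for i, m in enumerate(local_models) if i not in forget_ids]
    return {
        k: [sum(vals) / len(retained)
            for vals in zip(*(m[k] for m in retained))]
        for k in retained[0]
    }

# Three clients' local models; client 2 is to be unlearned.
models = [
    {"w": [1.0, 2.0]},
    {"w": [3.0, 4.0]},
    {"w": [100.0, 100.0]},  # client whose influence must be removed
]
new_init = aggregate_local_models(models, forget_ids=[2])
print(new_init)  # {'w': [2.0, 3.0]}
```

Because the excluded client's parameters never enter the aggregation, its influence is removed exactly rather than approximately, which matches the abstract's claim of complete removal.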
Submission Number: 71