FLBoost: On-the-Fly Fine-tuning Boosts Federated Learning via Data-free Distillation

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference, Withdrawn Submission
Keywords: federated learning, adversarial learning, on-the-fly fine-tuning
Abstract: Federated Learning (FL) is an emerging distributed learning paradigm for protecting privacy. Data heterogeneity is one of the main challenges in FL, causing slow convergence and degraded performance. Most existing approaches tackle the heterogeneity challenge by restricting local model updates on the clients, ignoring the performance drop caused by directly aggregating the local models into a global one. In contrast, we propose a new solution, dubbed FLBoost, that relieves this aggregation issue by fine-tuning the global model on the server on the fly via data-free distillation. Specifically, FLBoost adopts an adversarial distillation scheme to continually transfer knowledge from the local models to the global model. In addition, focused distillation and attention-based ensemble techniques are developed to balance the extracted pseudo-knowledge under data heterogeneity, which implicitly mitigates the distribution discrepancy across clients. Extensive experiments show that FLBoost achieves superior performance against state-of-the-art FL algorithms and serves as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
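To make the server-side procedure concrete, below is a minimal sketch of data-free adversarial distillation of the kind the abstract describes: a generator synthesizes pseudo-data that maximizes student-teacher disagreement, and the global model (student) is distilled from the client models (teachers) on that data. All names (`Generator`, `ensemble_logits`, `finetune_global`) and hyperparameters are hypothetical illustrations, not the paper's implementation; in particular, the uniform teacher average below stands in for the paper's attention-based ensemble, and the focused-distillation weighting is not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise vectors to pseudo-samples (hypothetical architecture)."""
    def __init__(self, noise_dim=100, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def ensemble_logits(teachers, x):
    # Uniform average over client models; the paper's attention-based
    # ensemble would replace this mean with learned per-client weights.
    return torch.stack([t(x) for t in teachers], dim=0).mean(dim=0)

def finetune_global(global_model, client_models, steps=200, batch=64,
                    noise_dim=100, device="cpu"):
    """One round of server-side, data-free adversarial fine-tuning."""
    gen = Generator(noise_dim).to(device)
    opt_s = torch.optim.Adam(global_model.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for t in client_models:
        t.eval()

    for _ in range(steps):
        z = torch.randn(batch, noise_dim, device=device)

        # Generator step: maximize student-teacher discrepancy so the
        # pseudo-data probes regions where the global model still errs.
        x = gen(z)
        with torch.no_grad():
            t_logits = ensemble_logits(client_models, x)
        g_loss = -F.l1_loss(global_model(x), t_logits)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

        # Student step: distill the teacher ensemble on fresh pseudo-data.
        x = gen(z).detach()
        with torch.no_grad():
            t_logits = ensemble_logits(client_models, x)
        s_loss = F.kl_div(F.log_softmax(global_model(x), dim=1),
                          F.softmax(t_logits, dim=1), reduction="batchmean")
        opt_s.zero_grad()
        s_loss.backward()
        opt_s.step()

    return global_model
```

Under these assumptions, the aggregated global model would be passed through `finetune_global` after each communication round before being broadcast back to the clients.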
One-sentence Summary: We propose FLBoost, a new solution that fine-tunes the global model on the server via data-free distillation to boost its performance.