FedEED: Efficient Federated Distillation with Ensemble of Aggregated Models

Published: 01 Feb 2023, Last Modified: 13 Feb 2023
Submitted to ICLR 2023
Readers: Everyone
Keywords: Federated Learning, Knowledge Distillation
Abstract: In this paper, we study the key components of knowledge distillation-based model aggregation in federated learning (FL). We first propose a generalized distillation framework in which the process of federated distillation is divided into three key stages. By investigating the contribution of each stage, we introduce a new FL framework, named Federated Efficient Ensemble Distillation (FedEED), where the ensemble teacher is created from aggregated models. Experimental results show that FedEED outperforms state-of-the-art methods, including FedAvg and FedDF, on benchmark datasets. Beyond performance, FedEED also demonstrates improved scalability and privacy compared with existing distillation-based aggregation algorithms. In particular, FedEED does not require direct access to users' models, which helps protect user privacy. Furthermore, because the ensemble is built from aggregated models, FedEED is highly scalable, and its asymmetric distillation scheme allows server-side distillation to run in parallel with client-side local training, which can speed up the training of large-scale learning systems.
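The abstract does not include pseudocode, so the following is only a minimal sketch of the ensemble-of-aggregated-models idea it describes, assuming PyTorch-style models, a hypothetical partition of client updates into groups, and an unlabeled server-side distillation loader. Names such as average_state_dicts, build_ensemble_teacher, client_states, and distill_loader are illustrative, not taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def average_state_dicts(state_dicts):
    """FedAvg-style parameter averaging over a group of client models."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def build_ensemble_teacher(client_states, model_fn, num_groups):
    """Split client updates into groups, average each group, and return the
    aggregated models as an ensemble of teachers (no single client's model
    is used as a teacher on its own)."""
    groups = [client_states[i::num_groups] for i in range(num_groups)]
    teachers = []
    for group in groups:
        if not group:
            continue
        model = model_fn()
        model.load_state_dict(average_state_dicts(group))
        model.eval()
        teachers.append(model)
    return teachers

def distill_server_model(student, teachers, distill_loader, epochs=1, lr=1e-3, temp=1.0):
    """Server-side distillation: the student matches the averaged soft labels
    produced by the ensemble of aggregated teacher models on unlabeled data."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in distill_loader:  # unlabeled batches
            with torch.no_grad():
                probs = torch.stack(
                    [F.softmax(t(x) / temp, dim=-1) for t in teachers]
                ).mean(dim=0)  # ensemble soft labels
            loss = F.kl_div(F.log_softmax(student(x) / temp, dim=-1),
                            probs, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

For simplicity the sketch performs grouping and averaging on the server; the paper's privacy claim suggests the aggregated models are obtained without exposing individual client models (for example via secure aggregation), which is not shown here.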
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning