Keywords: Federated Learning, AI Safety, Autonomous Driving, Drug Discovery, Clinical Diagnosis, Recommender Systems
TL;DR: Elastic aggregation works well with other federated optimizers and achieves significant improvements across the board.
Abstract: Federated learning enables the privacy-preserving training of neural network models using real-world data across distributed clients.
FedAvg has become the preferred optimizer for federated learning because of its simplicity and effectiveness.
FedAvg uses naïve aggregation to update the server model, interpolating client models based on the number of instances used in their training.
However, naïve aggregation suffers from client-drift when the data is heterogeneous~(non-IID), leading to unstable and slow convergence.
In this work, we propose a novel aggregation approach, elastic aggregation, to overcome these issues. Elastic aggregation interpolates client models adaptively according to parameter sensitivity, which is measured by computing how much the overall prediction function output changes when each parameter is changed. This measurement is performed in an unsupervised and online manner.
Elastic aggregation reduces the magnitudes of updates to the more sensitive parameters so as to prevent the server model from drifting to any one client distribution, and conversely boosts updates to the less sensitive parameters to better explore different client distributions.
Empirical results on real and synthetic data as well as analytical results show that elastic aggregation leads to efficient training in both convex and non-convex settings, while being fully agnostic to client heterogeneity and robust to large numbers of clients, partial participation, and imbalanced data.
Finally, elastic aggregation works well with other federated optimizers and achieves significant improvements across the board.
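The aggregation rule described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's exact formulation: the sensitivity normalization, the scaling factor `zeta`, and the hyper-parameter `tau` are assumptions chosen to match the stated behavior (damp updates to sensitive parameters, boost updates to insensitive ones).

```python
import numpy as np

def elastic_aggregate(server_params, client_deltas, client_sizes,
                      sensitivity, tau=0.5):
    """Illustrative sketch of elastic aggregation (not the paper's exact rule).

    server_params : 1-D array of flattened server model parameters
    client_deltas : list of 1-D arrays, (client model - server model) per client
    client_sizes  : number of training instances per client (FedAvg weights)
    sensitivity   : per-parameter sensitivity estimate, e.g. an online running
                    average of how much the prediction output changes when
                    each parameter is perturbed
    tau           : hypothetical hyper-parameter setting the damp/boost range
    """
    # Normalize sensitivity to [0, 1]; higher = more sensitive parameter.
    s = sensitivity / (sensitivity.max() + 1e-12)
    # Per-parameter scaling in [tau, 1 + tau]: sensitive parameters are
    # damped (zeta < 1), insensitive ones are boosted (zeta > 1).
    zeta = 1.0 + tau - s
    # FedAvg-style interpolation weights by number of training instances.
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    # Weighted average of client deltas, then elastic per-parameter scaling.
    avg_delta = sum(w_k * d_k for w_k, d_k in zip(weights, client_deltas))
    return server_params + zeta * avg_delta
```

With `zeta` fixed to 1 everywhere this reduces to plain FedAvg aggregation, which is why the approach composes with other federated optimizers: only the per-parameter scaling of the averaged update changes.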
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)