Communication Efficient Federated Representation Learning

18 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Federated learning, Communication efficiency, Distributed machine learning
Abstract: The Federated Averaging (FedAvg) algorithm is a widely used technique in federated learning. It follows a recursive pattern in which nodes perform a few local stochastic gradient descent (SGD) steps and the central server then updates the model by averaging. The primary purpose of model averaging is to mitigate the consensus error that arises between models on different nodes. Our empirical examination shows that, in a non-iid data distribution setting, the consensus error in the initial layers of a deep neural network is considerably smaller than that observed in the later layers. This observation suggests that a less intensive averaging approach can be applied to the initial layers, which are typically designed to extract meaningful representations from the network's input. To delve deeper into this phenomenon, we formally analyze it in the context of linear representations. We show that increasing the number of local SGD iterations, or reducing the averaging frequency of the representation extractor, leads to better generalization in the model produced by FedAvg. The paper concludes with experimental results demonstrating the effectiveness of this method.
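Below is a minimal sketch, not the authors' implementation, of the layer-wise averaging idea described in the abstract: clients run a few local SGD steps each round, the head is averaged every round, and the representation layers are averaged only every few rounds. The toy two-layer model, the rep/head parameter naming, and the rep_avg_period knob are illustrative assumptions, not details taken from the paper.

```python
# Sketch of FedAvg with less frequent averaging of the representation layers.
import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.rep = nn.Linear(10, 16)   # representation extractor (early layer)
        self.head = nn.Linear(16, 1)   # task-specific head (later layer)

    def forward(self, x):
        return self.head(torch.relu(self.rep(x)))


def local_sgd(model, data, steps=5, lr=0.05):
    """Run a few local SGD steps on one client's data (a list of (x, y) batches)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in data:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()


def average(models, prefix):
    """Average all parameters whose name starts with `prefix` across clients."""
    states = [m.state_dict() for m in models]
    return {k: torch.stack([s[k] for s in states]).mean(0)
            for k in states[0] if k.startswith(prefix)}


def run(client_data, rounds=20, rep_avg_period=4):
    clients = [Net() for _ in client_data]
    init = clients[0].state_dict()           # shared initialization
    for c in clients:
        c.load_state_dict(init)
    for t in range(rounds):
        for c, data in zip(clients, client_data):
            local_sgd(c, data)
        sync = average(clients, "head")       # head: averaged every round
        if (t + 1) % rep_avg_period == 0:     # representation: averaged less often
            sync.update(average(clients, "rep"))
        for c in clients:
            c.load_state_dict(sync, strict=False)
    return clients[0]
```

In this sketch, clients keep their own representation weights between the infrequent synchronization rounds, which is one way to realize the "less intensive averaging" of the initial layers while the head is still averaged at every round.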
Supplementary Material: zip
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1519