ON THE CONVERGENCE OF CYCLIC HIERARCHICAL FEDERATED LEARNING WITH HETEROGENEOUS DATA

24 Sept 2024 (modified: 22 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: hierarchical federated learning, convergence analysis, cyclic pattern
Abstract:

Hierarchical Federated Learning (HFL) extends classic Federated Learning (FL) by introducing a multi-layer architecture between clients and the central server, in which edge servers aggregate models from their respective clients and forward them to the central server. By avoiding the direct upload of every client update for aggregation, HFL not only reduces communication and computational overhead but also greatly enhances scalability to support a massive number of clients. When HFL serves applications with a large number of clients, edge servers can train their models in a cyclic pattern (a ring architecture), as opposed to the star architecture in which each edge server develops its model independently. We refer to this as Cyclic HFL (CHFL). Owing to its promise in handling data heterogeneity and its resilience, CHFL has great potential for practical deployment. Unfortunately, a thorough convergence analysis of CHFL remains lacking, especially in the presence of the data heterogeneity that is widespread among clients. To the best of our knowledge, we are the first to provide a theoretical convergence analysis for CHFL under strongly convex, general convex, and non-convex objectives. Our results show convergence rates of $\tilde{\mathcal{O}}(1/MNRKT)$ for strongly convex objectives, $\mathcal{O}(1/\sqrt{MNRKT})$ for general convex objectives, and $\mathcal{O}(1/\sqrt{MNRKT})$ for non-convex objectives, under standard assumptions. Here, $M$ is the number of edge servers, $N$ is the number of clients per edge server, $K$ is the number of local steps per client, and $R$ is the number of edge training rounds. Through extensive experiments on real-world datasets, we validate our theoretical findings and further show that CHFL achieves comparable or superior performance under both inter-edge and intra-edge data heterogeneity.
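To make the setup concrete, the following is a minimal Python sketch of one plausible reading of the cyclic pattern described in the abstract: the model is passed sequentially around a ring of $M$ edge servers; at each edge server, $R$ edge training rounds are run, and in each round the edge's $N$ clients perform $K$ local SGD steps before being averaged. The function names, the toy quadratic objective, and the sequential passing rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of a Cyclic HFL (CHFL) pass, assuming the edge-level model is
# handed sequentially around a ring of M edge servers. Names such as
# local_sgd and chfl_round are illustrative, not from the paper.
import numpy as np

def local_sgd(w, data, K, lr, grad_fn):
    """Client update: K local SGD steps starting from the received edge model."""
    w = w.copy()
    for _ in range(K):
        w -= lr * grad_fn(w, data)
    return w

def chfl_round(w_global, edges, R, K, lr, grad_fn):
    """One pass of the model around the ring of edge servers.

    `edges` is a list of length M; each entry is a list of N client datasets.
    """
    w = w_global.copy()
    for clients in edges:                       # cyclic (ring) traversal of edges
        for _ in range(R):                      # R edge training rounds
            client_models = [local_sgd(w, d, K, lr, grad_fn) for d in clients]
            w = np.mean(client_models, axis=0)  # intra-edge aggregation
    return w                                    # returned to the central server

# Toy example with a quadratic objective per client (hypothetical data).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, R, K, T, lr, dim = 3, 4, 2, 5, 10, 0.05, 8
    edges = [[rng.normal(size=dim) for _ in range(N)] for _ in range(M)]
    grad_fn = lambda w, target: w - target      # gradient of 0.5*||w - target||^2
    w = np.zeros(dim)
    for t in range(T):                          # T global rounds
        w = chfl_round(w, edges, R, K, lr, grad_fn)
    print("distance to mean target:",
          np.linalg.norm(w - np.mean(sum(edges, []), axis=0)))
```

With the toy quadratic losses above, the cyclically trained model converges toward the mean of the client optima, which mirrors the intuition behind the stated convergence rates; the actual analysis in the paper covers strongly convex, general convex, and non-convex objectives under standard assumptions.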

Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3701