Analyzing Implicit Regularization in Federated Learning

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: federated learning, implicit regularization, backward error analysis, optimization
Abstract: Backward error analysis is a powerful technique for quantifying how much the path of the gradient flow is modified under a finite learning rate. Through this technique, one can also derive an implicit regularizer that affects the convergence behavior of an optimizer. Using backward error analysis, this paper seeks an intuitive yet quantitative understanding of convergence under various federated learning algorithms. We prove that the implicit regularizer of FedAvg disperses the gradient of each client away from the average gradient, thereby increasing the gradient variance. We then show theoretically that this implicit regularizer hampers convergence when the variance of client gradients would otherwise decrease along the gradient of the cost function. To verify our analysis, we run experiments on FedAvg with and without the drifting term and confirm that FedAvg without the drifting term achieves higher test accuracy. Our analysis also explains the convergence behavior of variance reduction methods such as SCAFFOLD, FedDyn, and FedSAM, showing that their implicit regularizers have a smaller or zero drifting effect when the learning rate is small. In particular, we offer a possible explanation for why FedSAM can outperform FedAvg yet may not perform as well as other, more stable variance reduction methods under data heterogeneity.
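For context, the following background sketch illustrates the kind of object the abstract refers to; it uses standard results, not the paper's exact statements. Backward error analysis shows that finite-step gradient descent follows a modified loss with an added gradient-norm penalty (Barrett & Dherin, 2021), and when such a penalty is averaged over N clients, a purely algebraic identity separates out a client-gradient-variance term. A term of this variance type is what the abstract's "drifting" effect concerns; the sign and coefficient it carries under FedAvg's local updates are the paper's contribution and are not reproduced here.

% Modified loss for gradient descent with step size \eta (Barrett & Dherin, 2021):
\tilde{L}(\theta) = L(\theta) + \frac{\eta}{4}\,\bigl\lVert \nabla L(\theta) \bigr\rVert^{2}

% With N clients, per-client losses L_i, and global loss L = \frac{1}{N}\sum_i L_i,
% averaging the penalty over clients decomposes exactly (bias-variance identity):
\frac{1}{N}\sum_{i=1}^{N} \bigl\lVert \nabla L_i(\theta) \bigr\rVert^{2}
  = \bigl\lVert \nabla L(\theta) \bigr\rVert^{2}
  + \underbrace{\frac{1}{N}\sum_{i=1}^{N} \bigl\lVert \nabla L_i(\theta) - \nabla L(\theta) \bigr\rVert^{2}}_{\text{client gradient variance}}

Under this decomposition, an optimizer whose implicit regularizer carries the variance term with a dispersing (negative-penalty) sign would push client gradients apart, matching the abstract's description of FedAvg's drifting term.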
Supplementary Material: pdf
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3156