Knowledge Is Not Wisdom: Weight Balancing Mechanism for Local and Global Training in Federated Learning

22 Sept 2023 (modified: 25 Mar 2024). ICLR 2024 Conference Withdrawn Submission.
Keywords: Federated learning, Data heterogeneity, Non-IID, Client drift, Canceling-out
TL;DR: We employ a local balancer to mitigate biases in favor of specific classes and an aggregation balancer to regulate biases toward certain clients.
Abstract: Federated learning (FL) is a distributed training approach that leverages client-side computing resources and data on edge devices. Data heterogeneity is a primary challenge that makes federated learning difficult, and many studies have been conducted to address it. Previous solutions have focused mainly on the client side, for example by adjusting the weights of the local model or by using proxy data held by the aggregation server. However, we identify a problem in which the global model becomes biased when client models are averaged, depending on the amount of data each client holds or shares. We therefore introduce local and aggregation balancers for federated learning (FedBal), which mediate local training according to the class distribution and mediate weight aggregation across clients, respectively. The local balancer mitigates bias in favor of specific classes, and the aggregation balancer regulates bias toward certain clients. Remarkably, in experiments applying an aggregation balancer to various existing methods, we found that weighting the models of marginalized clients more heavily than those of clients with abundant data and classes can improve the accuracy of the global model by 2%–7%. FedBal, which combines the two balancers, achieved an average accuracy improvement of 3%–4% over all other methods. This study raises several questions for further work to deepen our understanding of the role of the aggregation framework in FL.
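To make the aggregation-balancer idea above concrete, here is a minimal sketch contrasting FedAvg-style averaging (weights proportional to client data size) with a balancer that up-weights marginalized clients. The inverse-size weighting rule, the function name `aggregate`, and the toy data are illustrative assumptions, not the exact scheme used by FedBal.

```python
# Illustrative sketch only: FedAvg-style aggregation vs. an "aggregation balancer"
# that up-weights clients with little data instead of data-rich ones.
# The inverse-size weighting below is an assumption, not the paper's exact rule.
import numpy as np

def aggregate(client_models, client_sizes, balance=False):
    """Average client parameter dicts into a global model.

    client_models: list of {param_name: np.ndarray}
    client_sizes:  list of per-client sample counts
    balance:       False -> FedAvg weights (proportional to data size)
                   True  -> balancer weights (inversely proportional to data size)
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = (1.0 / sizes) if balance else sizes
    weights = weights / weights.sum()          # normalize so weights sum to 1

    global_model = {}
    for name in client_models[0]:
        stacked = np.stack([m[name] for m in client_models])
        global_model[name] = np.tensordot(weights, stacked, axes=1)
    return global_model

# Toy usage: two clients, one data-rich and one marginalized.
clients = [{"w": np.array([1.0, 1.0])}, {"w": np.array([3.0, 3.0])}]
print(aggregate(clients, [900, 100], balance=False)["w"])  # pulled toward client 0
print(aggregate(clients, [900, 100], balance=True)["w"])   # pulled toward client 1
```

Under this assumed rule, the balanced aggregate moves the global model toward the client with fewer samples, which is the direction of the effect reported in the abstract.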
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4846