Keywords: Federated Learning
Abstract: Federated Learning (FL) is a framework for collaborative model training over large-scale distributed data. Centralized FL leverages a server to aggregate client models, which can improve performance while preserving client data privacy. However, centralized model aggregation has been shown to degrade performance when data are non-IID across clients. We observe that training a client locally on more data than necessary does not benefit the overall performance across clients. In this paper, we devise a novel framework that leverages Deep Reinforcement Learning (DRL) to train an agent that selects the amount of data needed to train a client model without oversharing information with the server. Starting with no knowledge of the client's performance, the DRL agent uses the change in training loss as a reward signal and learns the amount of data needed to improve the client's performance. Specifically, after each aggregation round, the DRL agent treats the local performance as the current state and outputs per-class weights for the training data to be used during the next round of local training. In doing so, the agent learns a policy that creates the optimal partition of the local training dataset across the FL rounds. After FL, each client uses its entire local training dataset to further improve performance on its own data distribution, mitigating the non-IID effects of aggregation. Through extensive experiments, we demonstrate that training FL clients with our algorithm yields superior performance on multiple benchmark datasets and FL frameworks.
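The following is a minimal sketch, not the authors' implementation, of the per-round loop the abstract describes: a DRL agent observes local performance as its state, outputs per-class weights that partition the data for the next round of local training, and is rewarded by the change in training loss. All names and values here (ClassWeightAgent, local_train, NUM_CLASSES, the Gaussian-exploration policy update) are illustrative assumptions; the paper's actual state representation, policy class, and update rule may differ.

```python
import numpy as np

NUM_CLASSES = 10   # hypothetical number of label classes
ROUNDS = 5         # hypothetical number of FL aggregation rounds
LR = 0.1           # assumed policy learning rate

rng = np.random.default_rng(0)


class ClassWeightAgent:
    """Softmax policy over per-class sampling weights (REINFORCE-style sketch)."""

    def __init__(self, num_classes):
        self.logits = np.zeros(num_classes)

    def act(self):
        # Sample an action: perturb the logits for exploration, then
        # softmax-normalize into per-class weights for the next round.
        noisy = self.logits + rng.normal(0.0, 0.1, self.logits.shape)
        weights = np.exp(noisy) / np.exp(noisy).sum()
        return weights, noisy

    def update(self, noisy_logits, reward):
        # Crude policy-gradient step: move logits toward actions that
        # produced a positive reward (i.e., a drop in training loss).
        self.logits += LR * reward * (noisy_logits - self.logits)


def local_train(class_weights):
    """Stub for one round of local training on a weighted data partition.

    Returns the post-round training loss. A real client would reweight or
    subsample its local dataset per class and run SGD on the client model.
    """
    target = np.full(NUM_CLASSES, 1.0 / NUM_CLASSES)  # pretend uniform is optimal
    return float(np.abs(class_weights - target).sum())


agent = ClassWeightAgent(NUM_CLASSES)
prev_loss = local_train(np.full(NUM_CLASSES, 1.0 / NUM_CLASSES))
for rnd in range(ROUNDS):
    weights, noisy = agent.act()      # per-class weights for this round
    loss = local_train(weights)       # next round of local training
    reward = prev_loss - loss         # change in training loss as reward
    agent.update(noisy, reward)
    prev_loss = loss
```

In this toy setup the reward is computed purely from local losses, matching the abstract's claim that the agent learns without oversharing information with the server; in practice the aggregation step between rounds would also change the client model and hence the observed loss.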
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5287