Keywords: green computing, federated learning, statistical heterogeneity, LLM agent
Abstract: Federated Learning (FL), as a privacy-preserving distributed machine learning paradigm, faces significant challenges from data and device heterogeneity in practical applications. In this paper, we present a novel Large Language Model agent decision system, called Green Federated Learning Agent (GFLAgent), to alleviate the challenges arising from data and device heterogeneity in FL tasks. GFLAgent is efficient and energy-friendly, meeting the requirements of green computing. It dynamically monitors the status of each client, selects clients and allocates them appropriately to different layers to achieve efficient asynchronous training, and responds to unexpected situations during training. Furthermore, to reduce overall system expenditure, we adopt a strategy that minimizes local training overhead and update costs for clients with historically subpar performance. Experimental results show that GFLAgent outperforms state-of-the-art methods and can be readily ported to other distributed machine learning frameworks to improve their efficiency.
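The abstract describes layered, status-aware client scheduling with lighter update duties for historically subpar clients. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' implementation; all names (ClientStatus, assign_tiers, plan_round) and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of status-aware client tiering for asynchronous FL rounds.
# Not the GFLAgent implementation; names and thresholds are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class ClientStatus:
    client_id: int
    avg_latency_s: float      # recent average local training + upload time
    recent_accuracy: float    # moving average of local evaluation accuracy
    energy_budget: float      # remaining energy budget (arbitrary units)

def assign_tiers(clients: List[ClientStatus], n_tiers: int = 3) -> Dict[int, int]:
    """Rank clients by latency and split them into tiers for asynchronous training."""
    ranked = sorted(clients, key=lambda c: c.avg_latency_s)
    tier_size = max(1, len(ranked) // n_tiers)
    tiers: Dict[int, int] = {}
    for idx, client in enumerate(ranked):
        tiers[client.client_id] = min(idx // tier_size, n_tiers - 1)
    return tiers

def plan_round(clients: List[ClientStatus], tiers: Dict[int, int],
               accuracy_floor: float = 0.5) -> Dict[int, str]:
    """Decide per-client work for the next round: full update, light update, or skip."""
    plan: Dict[int, str] = {}
    for c in clients:
        if c.energy_budget <= 0.0:
            plan[c.client_id] = "skip"           # exhausted budget: exclude this round
        elif c.recent_accuracy < accuracy_floor:
            plan[c.client_id] = "light_update"   # historically subpar: cheaper partial update
        else:
            plan[c.client_id] = f"full_update_tier{tiers[c.client_id]}"
    return plan

if __name__ == "__main__":
    demo = [
        ClientStatus(0, 1.2, 0.82, 5.0),
        ClientStatus(1, 4.5, 0.44, 3.0),
        ClientStatus(2, 0.8, 0.91, 0.0),
    ]
    print(plan_round(demo, assign_tiers(demo)))
```

In this sketch the scheduling is purely rule-based; the abstract's LLM-agent component would presumably replace or augment such rules with model-driven decisions.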
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13948