Keywords: LLM Agent, Federated Learning
TL;DR: We explored FedAgent, a new collaborative paradigm to train LLM agents across distributed clients without data sharing, and built FedAgentGym, the first decentralized agent learning environment to investigate its effectiveness and impact factors.
Abstract: Autonomous AI agents powered by LLMs have shown remarkable abilities in diverse domains. However, the training process typically requires centralized collection of large amounts of real-world user data, posing substantial privacy and regulatory concerns. To this end, we explore a new decentralized training paradigm, namely FedAgent (Federated Agent Reinforcement Learning), which enables collaborative learning of AI agents across distributed clients without sharing local data. Moreover, we construct FedAgentGym, the first decentralized agent learning environment, which includes four types of LLM agents, two application scenarios (WebShop and ALFWorld), three variations of decentralized settings, and three newly defined heterogeneity challenges (Preference Heterogeneity, Coverage Heterogeneity, and Hardness Heterogeneity), to systematically investigate its effectiveness and impact factors. Extensive theoretical and empirical studies show that FedAgent achieves performance comparable to the centralized training paradigm and exhibits strong robustness against heterogeneities, demonstrating the feasibility of training AI agents without sacrificing data privacy. The code is available.
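To make the collaboration-without-data-sharing idea concrete, the sketch below shows a standard federated-averaging (FedAvg) aggregation step, where clients send only parameter updates, never raw data. This is a generic illustration, not the paper's actual aggregation rule; the function name `fedavg` and the use of sample counts as weights are assumptions.

```python
# Minimal sketch of a FedAvg-style aggregation step (illustrative only;
# FedAgent's actual update rule may differ).

def fedavg(client_params, client_sizes):
    """Size-weighted average of per-client parameter vectors.

    client_params: list of equal-length lists of floats, one per client
    client_sizes:  local sample (or trajectory) count per client
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        weight = size / total  # clients with more local data count more
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Example: two clients with unequal data shares.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(fedavg(clients, sizes))  # [2.5, 3.5]
```

Only the averaged `global_params` would be redistributed to clients for the next local training round, which is why no raw user interactions ever leave a client.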
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21121