Federated TD Learning with Linear Function Approximation under Environmental Heterogeneity

TMLR Paper1754 Authors

29 Oct 2023 (modified: 07 Jun 2024) · Decision pending for TMLR
Abstract: We initiate the study of federated reinforcement learning under environmental heterogeneity by considering a policy evaluation problem. Our setup involves $N$ agents interacting with environments that share the same state and action space but differ in their reward functions and state transition kernels. Assuming agents can communicate via a central server, we ask: \textit{Does exchanging information expedite the process of evaluating a common policy?} To answer this question, we provide the first comprehensive finite-time analysis of a federated temporal difference (TD) learning algorithm with linear function approximation, while accounting for Markovian sampling, heterogeneity in the agents' environments, and multiple local updates to save communication. Our analysis crucially relies on several novel ingredients: (i) deriving perturbation bounds on TD fixed points as a function of the heterogeneity in the agents' underlying Markov decision processes (MDPs); (ii) introducing a virtual MDP to closely approximate the dynamics of the federated TD algorithm; and (iii) using the virtual MDP to make explicit connections to federated optimization. Putting these pieces together, we prove that in a low-heterogeneity regime, exchanging model estimates leads to linear convergence speedups in the number of agents. Our theoretical contribution is significant in that it is the first result of its kind in multi-agent/federated reinforcement learning that complements the numerous analogous results in heterogeneous federated optimization.
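For concreteness, here is a minimal sketch of the kind of federated TD(0) procedure with linear function approximation described in the abstract: each agent runs several local TD updates along a Markovian trajectory in its own MDP, and a central server periodically averages the agents' parameter vectors. The environment interface (`env.step`, `env.reset`), the feature map `phi`, the policy, and all hyperparameter names below are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def local_td_update(theta, env, phi, policy, s, alpha, gamma, K):
    """Run K local TD(0) steps on one agent's own MDP (Markovian sampling).

    All interfaces here are assumed: env.step(s, a) returns (next_state, reward)
    drawn from that agent's transition kernel and reward function.
    """
    for _ in range(K):
        a = policy(s)
        s_next, r = env.step(s, a)
        td_error = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        theta = theta + alpha * td_error * phi(s)
        s = s_next  # continue along the same trajectory (no resampling)
    return theta, s

def federated_td(envs, phi, policy, d, alpha=0.05, gamma=0.99, K=10, rounds=100):
    """Server averages the agents' parameters after every K local updates."""
    theta = np.zeros(d)                        # shared model estimate
    states = [env.reset() for env in envs]     # one ongoing trajectory per agent
    for _ in range(rounds):
        local_thetas = []
        for i, env in enumerate(envs):
            theta_i, states[i] = local_td_update(
                theta.copy(), env, phi, policy, states[i], alpha, gamma, K)
            local_thetas.append(theta_i)
        theta = np.mean(local_thetas, axis=0)  # communication round: average
    return theta
```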
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added motivating examples and related work, and improved the main theorem.
Assigned Action Editor: ~Naman_Agarwal1
Submission Number: 1754