Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee

Published: 21 May 2021, 20:46 (modified: 30 Oct 2021, 01:58) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: Reinforcement learning, federated learning, Byzantine-tolerant optimization
TL;DR: This paper provides a theoretical foundation for studying the sample efficiency of Federated Reinforcement Learning with respect to the number of participating agents, while accounting for faulty agents.
Abstract: The growing literature on Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL), in which multiple agents federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing work on FRL fails to I) provide a theoretical analysis of its convergence, and II) account for random system failures and adversarial attacks. To this end, we propose the first FRL framework whose convergence is guaranteed and which tolerates fewer than half of the participating agents suffering random system failures or acting as adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents while accounting for such potential failures or attacks. All theoretical results are empirically verified on various RL benchmark tasks.
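The abstract's tolerance condition (fewer than half of the agents faulty) is the classic regime in which robust aggregation rules remain sound. As an illustrative sketch only, and not the paper's actual aggregation rule, coordinate-wise median aggregation of agents' policy gradients shows why a strict honest majority suffices: per coordinate, the median lands on (or between) honest values whenever Byzantine reports are in the minority. The function name and the toy gradients below are assumptions for illustration.

```python
import numpy as np

def byzantine_robust_aggregate(gradients):
    """Coordinate-wise median of agent gradient vectors.

    Illustrative sketch (not the paper's method): with strictly fewer
    than half of the agents reporting arbitrary (Byzantine) values,
    the per-coordinate median is bounded by honest agents' values.
    """
    return np.median(np.stack(gradients), axis=0)

# 5 honest agents report gradients near the true value [1.0, -2.0];
# 2 Byzantine agents report arbitrary, large values.
rng = np.random.default_rng(0)
honest = [np.array([1.0, -2.0]) + 0.01 * rng.standard_normal(2) for _ in range(5)]
byzantine = [np.array([100.0, 100.0]), np.array([-100.0, 100.0])]

agg = byzantine_robust_aggregate(honest + byzantine)
print(agg)  # close to [1.0, -2.0] despite the 2 faulty agents
```

With 7 reports per coordinate, the median is the 4th-smallest value, which is always one of the 5 honest reports here; a plain mean, by contrast, would be dragged far off by a single Byzantine agent.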
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/flint-xf-fan/Byzantine-Federeated-RL