BR-DeFedRL: Byzantine-Robust Decentralized Federated Reinforcement Learning with Fast Convergence and Communication Efficiency
Abstract: In this paper, we propose Byzantine-Robust Decentralized Federated Reinforcement Learning (BR-DeFedRL), a framework that mitigates the harmful influence of Byzantine agents by adaptively adjusting communication weights, thereby significantly enhancing the robustness of the learning system. By leveraging decentralized learning, our approach eliminates dependence on a central server. Balancing the number of communication rounds against sample complexity, BR-DeFedRL achieves efficient convergence at a rate of $\mathcal{O}\left( {\frac{1}{{TN}}} \right)$, where $T$ denotes the number of communication rounds and $N$ the number of local variance-reduction steps. Notably, each agent attains an $\varepsilon$-approximation with a state-of-the-art sample complexity of $\mathcal{O}\left( {\frac{1}{{\varepsilon N}} + \frac{1}{\varepsilon }} \right)$. Extensive experimental validation further confirms the efficacy of BR-DeFedRL, making it a promising and practical solution for Byzantine-robust decentralized federated reinforcement learning.
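The abstract's core mechanism is down-weighting communication with agents whose updates look anomalous. A minimal sketch of one such adaptive-weight aggregation rule is shown below; this is an illustrative heuristic (deviation-based softmax weighting), not the paper's exact algorithm, and `robust_aggregate` and its `temperature` parameter are hypothetical names introduced here.

```python
import numpy as np

def robust_aggregate(own_params, neighbor_params, temperature=1.0):
    """Illustrative adaptive-weight aggregation (assumption, not the
    paper's exact rule): neighbors whose parameter vectors deviate more
    from the agent's own receive exponentially smaller mixing weights,
    limiting the influence of Byzantine agents."""
    dists = np.array([np.linalg.norm(p - own_params) for p in neighbor_params])
    # Softmax over negative distances: large deviation -> near-zero weight.
    logits = -dists / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, neighbor_params))

# Two honest neighbors close to the agent's own parameters, one
# Byzantine neighbor sending an extreme vector: the aggregate stays
# close to the honest values because the outlier's weight collapses.
own = np.zeros(3)
honest = [np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.1, 0.0])]
byzantine = [np.array([100.0, 100.0, 100.0])]
agg = robust_aggregate(own, honest + byzantine)
```

In a fully decentralized setting, each agent would apply such a rule locally over its own neighborhood, which is what removes the need for a central server.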
External IDs: dblp:conf/infocom/QiaoZYYCZRY24