Achieving Communication-Efficient Policy Evaluation for Multi-Agent Reinforcement Learning: Local TD-Steps or Batching?
Abstract: In many consensus-based actor-critic multi-agent reinforcement learning (MARL) strategies, one of the key components is the MARL policy evaluation (PE) problem, where a set of $N$ agents work cooperatively to evaluate the value function of the global states under a given policy only through communicating with their neighbors.
In MARL-PE, a critical challenge is how to lower the communication complexity, defined as the number of communication rounds between neighboring agents required to converge to an $\epsilon$-stationary point.
To lower the communication complexity of MARL-PE, there exist two ``natural'' ideas: i) using batching to reduce the variance of the TD (temporal difference) errors, which in turn improves the convergence rate of MARL-PE; and ii) performing multiple local TD update steps between consecutive rounds of communication, so as to reduce the communication frequency.
While the effectiveness of the batching approach has been verified and is relatively well understood, the validity of the local TD-steps approach remains unclear due to the potential ``agent-drift'' phenomenon resulting from various heterogeneity factors across agents.
This leads to an interesting open question in MARL-PE: *Does the local TD-steps approach really work and how does it perform in comparison to the batching approach?*
In this paper, we make a first attempt to answer this fundamental question.
Our theoretical analysis and experimental results confirm that allowing multiple local TD steps is indeed a valid approach in lowering the communication complexity of MARL-PE compared to vanilla consensus-based MARL-PE algorithms.
Specifically, the number of local TD steps between two consecutive communication rounds can be as large as $\mathcal{O}(\sqrt{1/\epsilon}\log{(1/\epsilon)})$ while still converging to an $\epsilon$-stationary point of MARL-PE.
Theoretically, we show that in order to reach the optimal sample complexity up to a log factor, the communication complexity of the local TD-steps approach is $\mathcal{O}(\sqrt{1/\epsilon}\log{(1/\epsilon)})$, which is *considerably worse* than that of TD learning with batching, whose communication complexity is $\mathcal{O}(\log (1/\epsilon))$. However, our experimental results show that allowing multiple local TD steps can perform as well as the batching approach.
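To make the two ideas compared in the abstract concrete, here is a minimal, hypothetical Python sketch (not the paper's algorithm or implementation) of consensus-based TD(0) policy evaluation, where each communication round performs either several local TD steps or a single batched TD step before a neighbor-averaging (consensus) step. The environment, mixing matrix `W`, step size, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of
# consensus-based TD(0) policy evaluation with N agents that share a global
# state but observe heterogeneous local rewards.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_STATES, GAMMA, ALPHA = 4, 10, 0.95, 0.05
FEAT = np.eye(N_STATES)                        # tabular features for simplicity

# Doubly-stochastic mixing matrix for a ring topology (consensus step).
W = np.zeros((N_AGENTS, N_AGENTS))
for i in range(N_AGENTS):
    W[i, i] = 0.5
    W[i, (i + 1) % N_AGENTS] = 0.25
    W[i, (i - 1) % N_AGENTS] = 0.25

P = rng.dirichlet(np.ones(N_STATES), size=N_STATES)   # shared transition kernel
R = rng.normal(size=(N_AGENTS, N_STATES))             # heterogeneous local rewards


def td_direction(theta_i, s, s_next, r):
    """TD(0) update direction for one agent's value parameter."""
    delta = r + GAMMA * FEAT[s_next] @ theta_i - FEAT[s] @ theta_i
    return delta * FEAT[s]


def run(comm_rounds, local_steps=1, batch_size=1):
    """Each communication round: every agent performs `local_steps` TD updates,
    each averaged over `batch_size` sampled transitions, then parameters are
    mixed with neighbors via W (one round of communication)."""
    theta = np.zeros((N_AGENTS, N_STATES))
    s = rng.integers(N_STATES)
    for _ in range(comm_rounds):
        for _ in range(local_steps):
            grads = np.zeros_like(theta)
            for _ in range(batch_size):
                s_next = rng.choice(N_STATES, p=P[s])
                for i in range(N_AGENTS):
                    grads[i] += td_direction(theta[i], s, s_next, R[i, s])
                s = s_next
            theta += ALPHA * grads / batch_size   # (batched) local TD step
        theta = W @ theta                         # neighbor averaging / consensus
    return theta


# Local TD-steps approach vs. batching approach under the same communication budget.
theta_local = run(comm_rounds=200, local_steps=10, batch_size=1)
theta_batch = run(comm_rounds=200, local_steps=1, batch_size=10)
print("consensus error (local TD steps):", np.std(theta_local, axis=0).mean())
print("consensus error (batching):      ", np.std(theta_batch, axis=0).mean())
```

In this toy comparison, both variants consume the same number of communication rounds; the local TD-steps variant trades extra local computation (and possible agent drift) for fewer communications per sample, mirroring the trade-off analyzed in the paper.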
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)
Supplementary Material: zip