Keywords: Multi-agent Reinforcement Learning, Adaptive TD-Lambda, Likelihood-Free Density Ratio, Parametric Importance Sampling
Abstract: Recent advances in multi-agent reinforcement learning (MARL) have prominently leveraged Temporal Difference Lambda, TD($\lambda$), to expedite temporal-difference learning of value functions. TD($\lambda$) in value-based MARL algorithms, and temporal-difference critic learning in Actor-Critic-based (AC-based) algorithms, combine Monte-Carlo returns with bootstrapped Q-function estimates obtained via dynamic programming, addressing the inherent bias-variance trade-off in value estimation. Building on this, recent work in single-agent reinforcement learning links an adaptive $\lambda$ value to the policy distribution. However, because of the large joint action space, the large observation space, and the limited transition data in MARL, computing the policy distribution statistically is infeasible.
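For context, TD($\lambda$) is built on the standard $\lambda$-return, a geometrically weighted mixture of $n$-step returns (the conventional definition, not notation specific to this submission):
\[
G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{\,n-1} G_t^{(n)}, \qquad G_t^{(n)} = \sum_{k=0}^{n-1} \gamma^{k} r_{t+k} + \gamma^{n} V(s_{t+n}),
\]
where $\lambda = 0$ recovers one-step bootstrapping (low variance, higher bias) and $\lambda = 1$ recovers the Monte-Carlo return (unbiased, higher variance); an adaptive $\lambda$ navigates this trade-off per sample.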
To solve this policy-distribution calculation problem in MARL settings, we employ a parametric likelihood-free density ratio estimator with two replay buffers instead of computing the distribution statistically. The two replay buffers, of different sizes, store historical trajectories that represent the data distributions of the past and current policies, respectively. Based on this estimator, we assign Adaptive TD($\lambda$), \textbf{ATD($\lambda$)}, values to state-action pairs according to their likelihood under the stationary distribution of the current policy.
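The following is a minimal sketch of one standard way such a classifier-based (likelihood-free) density-ratio estimator could look, assuming a logistic-regression discriminator over two buffers of different sizes; the class name, buffer capacities, and the mapping from the estimated ratio to a $\lambda$ in a fixed range are illustrative assumptions rather than the authors' exact design.

```python
# Minimal sketch (not the authors' implementation): a classifier-based,
# likelihood-free density-ratio estimator over two replay buffers, used to
# set a per-sample adaptive lambda. Buffer sizes, the discriminator model,
# and the ratio-to-lambda mapping are illustrative assumptions.
import numpy as np
from collections import deque
from sklearn.linear_model import LogisticRegression

class AdaptiveLambdaEstimator:
    def __init__(self, past_capacity=100_000, recent_capacity=10_000):
        # Large buffer ~ mixture of past policies; small buffer ~ current policy.
        self.past_buffer = deque(maxlen=past_capacity)
        self.recent_buffer = deque(maxlen=recent_capacity)
        self.clf = LogisticRegression(max_iter=200)
        self._fitted = False

    def add(self, state_action):
        # Every transition enters both buffers; the short buffer only retains
        # the most recent data, approximating the current policy's distribution.
        self.past_buffer.append(state_action)
        self.recent_buffer.append(state_action)

    def fit(self):
        # Train a discriminator to tell "recent" (label 1) from "past" (label 0);
        # its odds p / (1 - p) approximate the density ratio d_current / d_past.
        X = np.array(list(self.past_buffer) + list(self.recent_buffer))
        y = np.array([0] * len(self.past_buffer) + [1] * len(self.recent_buffer))
        self.clf.fit(X, y)
        self._fitted = True

    def lambda_for(self, state_action, lam_min=0.3, lam_max=0.95):
        # Map the estimated likelihood under the current policy's stationary
        # distribution to a lambda in [lam_min, lam_max]: on-distribution
        # samples get a larger lambda (longer, more Monte-Carlo-like returns).
        if not self._fitted:
            return lam_max
        p = self.clf.predict_proba(np.asarray(state_action)[None, :])[0, 1]
        return lam_min + (lam_max - lam_min) * p

# Usage with random feature vectors standing in for (state, action) encodings.
rng = np.random.default_rng(0)
est = AdaptiveLambdaEstimator(past_capacity=5000, recent_capacity=500)
for sa in rng.normal(size=(2000, 8)):
    est.add(sa)
est.fit()
print(est.lambda_for(rng.normal(size=8)))
```

The discriminator's predicted probability of the "recent" class serves as a likelihood-free proxy for how typical a state-action pair is under the current policy's stationary distribution, so no explicit policy distribution ever has to be computed.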
We apply the proposed method to two competitive baseline methods, QMIX for value-based algorithms and MAPPO for AC-based algorithms, on SMAC benchmarks and GFootball academy scenarios, and demonstrate consistently competitive or superior performance compared to baseline approaches with static $\lambda$ values.
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 11903