- Keywords: Attention, Neural Attention, Reinforcement Learning, Multi-agent Reinforcement Learning
- Abstract: Many potential applications of reinforcement learning (RL) in the real world involve interacting with other agents whose numbers vary over time. We propose new neural architectures for these multi-agent RL problems. In contrast to methods that train a separate, discrete policy for each agent and then enforce cooperation through an additional inter-policy mechanism, we learn multi-agent relationships at the policy level with an attentional architecture. In our method, all agents share the same policy but apply it independently in their own context, using attention to aggregate the other agents' state information when selecting their next action. The structure of our architectures allows them to be applied to environments with varying numbers of agents. We demonstrate our approach on a benchmark multi-agent autonomous-vehicle coordination problem, obtaining results superior to a full-knowledge, fully centralized reference solution and significantly outperforming it when scaling to large numbers of agents.
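The aggregation step described in the abstract, in which each agent attends over the other agents' states to produce a fixed-size summary regardless of how many agents are present, might be sketched as follows. This is a minimal illustration only: the function name, the projection matrices `W_q`, `W_k`, `W_v`, and the scaled dot-product form are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def attention_aggregate(ego_state, other_states, W_q, W_k, W_v):
    """Aggregate a variable number of other agents' states via
    scaled dot-product attention from the acting agent's perspective.

    ego_state:     (d,)   the acting agent's own observation
    other_states:  (n, d) observations of the n other agents (n may vary)
    W_q, W_k, W_v: (d, h) hypothetical learned projection matrices
    """
    q = ego_state @ W_q                   # (h,) query from the ego agent
    K = other_states @ W_k                # (n, h) keys, one per other agent
    V = other_states @ W_v                # (n, h) values, one per other agent
    scores = K @ q / np.sqrt(K.shape[1])  # (n,) compatibility scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the n other agents
    return weights @ V                    # (h,) fixed-size summary

# The fixed-size summary can be concatenated with ego_state and fed to a
# policy network shared by all agents, independent of the agent count.
rng = np.random.default_rng(0)
d, h = 4, 8
W_q, W_k, W_v = (rng.standard_normal((d, h)) for _ in range(3))
ego = rng.standard_normal(d)
summary_3 = attention_aggregate(ego, rng.standard_normal((3, d)), W_q, W_k, W_v)
summary_7 = attention_aggregate(ego, rng.standard_normal((7, d)), W_q, W_k, W_v)
```

Because the softmax-weighted sum collapses any number of agent states into one vector of size `h`, the same shared policy applies whether three or seven other agents are visible.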