Pheromone Based Independent Reinforcement Learning for Multiagent Navigation

Published: 01 Jan 2021, Last Modified: 13 Nov 2024 · NCAA 2021 · CC BY-SA 4.0
Abstract: Multiagent systems (MAS) have been widely applied in numerous domains, including computer networks, robotics, and smart grids, owing to their flexibility and reliability in complex problem-solving. Communication is essential for a multiagent system to stay organized and productive. Most existing studies either pre-define the communication protocols or adopt additional decision modules to schedule communication, which induces significant communication overhead and does not generalize directly to a large collection of agents. In this paper, we propose a lightweight communication framework—Pheromone Collaborative Deep Q-Network (PCDQN), which combines deep Q-networks with the pheromone-driven stigmergy mechanism. In partially observable environments, this framework exploits stigmergy as an indirect communication channel among independent reinforcement learning agents. Experiments on the minefield navigation task show that PCDQN achieves higher learning efficiency across multiple agents than the standard Deep Q-Network (DQN).
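The stigmergy mechanism the abstract describes can be pictured as a shared pheromone field that agents write to and read from, instead of messaging each other directly. Below is a minimal sketch of such a field; the class name, the evaporation and diffusion rates, and the local-window sensing are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


class PheromoneMap:
    """A shared grid acting as an indirect communication channel (stigmergy).

    Agents deposit pheromone at their positions; the field evaporates and
    diffuses over time, and each agent senses only a local patch, which can
    augment its partial observation as input to a Q-network.
    Hyperparameters here are hypothetical, not from the PCDQN paper.
    """

    def __init__(self, height, width, evaporation=0.1, diffusion=0.2):
        self.grid = np.zeros((height, width))
        self.evaporation = evaporation  # fraction of pheromone lost per step
        self.diffusion = diffusion      # fraction leaked to 4-neighbours per step

    def deposit(self, row, col, amount=1.0):
        # An agent leaves pheromone at its current cell.
        self.grid[row, col] += amount

    def step(self):
        # Evaporation: all pheromone decays toward zero.
        self.grid *= 1.0 - self.evaporation
        # Diffusion: each cell leaks a fraction, split among its 4-neighbours
        # (quarters that would cross the boundary are simply lost).
        leaked = self.grid * self.diffusion
        spread = np.zeros_like(self.grid)
        spread[1:, :] += leaked[:-1, :] / 4
        spread[:-1, :] += leaked[1:, :] / 4
        spread[:, 1:] += leaked[:, :-1] / 4
        spread[:, :-1] += leaked[:, 1:] / 4
        self.grid += spread - leaked

    def sense(self, row, col, radius=1):
        # An agent observes only the pheromone in its local window,
        # consistent with a partially observable setting.
        r0, r1 = max(0, row - radius), min(self.grid.shape[0], row + radius + 1)
        c0, c1 = max(0, col - radius), min(self.grid.shape[1], col + radius + 1)
        return self.grid[r0:r1, c0:c1].copy()


pheromones = PheromoneMap(5, 5)
pheromones.deposit(2, 2, 1.0)   # one agent marks its cell
pheromones.step()               # field evaporates and diffuses
patch = pheromones.sense(2, 2)  # another agent reads the local 3x3 patch
```

In an actual training loop, each agent would concatenate its sensed patch with its own observation before feeding the Q-network, so coordination emerges through the environment rather than through an explicit messaging protocol.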