Robust multi-agent reinforcement learning for noisy environments

Published: 01 Jan 2022, Last Modified: 12 May 2023 · Peer-to-Peer Netw. Appl. 2022
Abstract: Despite recent advances in reinforcement learning (RL), agents trained with RL are often sensitive to their environment, especially in multi-agent settings. Existing multi-agent reinforcement learning methods work well only under the assumption of a perfect environment, yet real-world environments are usually noisy. Inaccurate information obtained from a noisy environment hinders an agent's learning and can even cause training to fail. In this paper, we focus on the problem of training multiple robust agents in noisy environments. To tackle this problem, we propose a new algorithm, multi-agent fault-tolerant reinforcement learning (MAFTRL). Our main idea is to give each agent its own error detection mechanism and to design an information communication medium between agents. The error detection mechanism is based on an autoencoder that calculates the credibility of each agent's observation, effectively reducing environmental noise. The communication medium, based on an attention mechanism, significantly improves the agents' ability to extract useful information. Experimental results show that our approach accurately detects erroneous agent observations and achieves good performance and strong robustness in both traditional reliable environments and noisy environments. Moreover, MAFTRL significantly outperforms traditional methods in noisy environments.
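The abstract names two mechanisms without giving their equations: an autoencoder-based credibility score for each observation, and attention-weighted aggregation of inter-agent messages. The sketch below is a hypothetical illustration of those two ideas, not the paper's actual MAFTRL implementation; the `credibility` mapping (`exp` of negative reconstruction error) and the scaled dot-product pooling are common choices assumed here for concreteness.

```python
import numpy as np

def credibility(obs, encode, decode):
    """Hypothetical credibility score from autoencoder reconstruction error.

    An autoencoder trained on clean observations reconstructs them
    accurately; a noisy or faulty observation yields a large
    reconstruction error, which we map to a score in (0, 1].
    """
    recon = decode(encode(obs))
    err = np.mean((obs - recon) ** 2)
    return float(np.exp(-err))  # 1.0 when reconstruction is perfect

def attention_pool(query, messages):
    """Aggregate other agents' messages with scaled dot-product attention,
    so the receiving agent can focus on the most relevant information."""
    msgs = np.stack(messages)                    # (n_agents, d)
    scores = msgs @ query / np.sqrt(len(query))  # one score per message
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over agents
    return weights @ msgs                        # weighted message vector
```

A credibility score like this can down-weight an agent's own noisy observation before it is shared, while the attention pooling decides how much each incoming message contributes to the agent's decision.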