Multi-Agent Exploration via Self-Learning and Social Learning

Published: 01 Jan 2024 · Last Modified: 20 May 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Self-learning and social learning stand as two pivotal constituents of multi-agent exploration. Inspired by the way animals and humans explore unfamiliar environments to acquire survival skills, both by training themselves on unlabeled data and by replicating others' successful experiences, we propose a multi-agent reinforcement learning method, named Self-Learning and Social Learning (S2L), which aims to address complex tasks characterized by sparse rewards and intricate sequential structures. Specifically, in Self-Learning, we incorporate both task-specific and task-agnostic intrinsic rewards; these incentives steer individual agents toward exploring and comprehending the environment. Furthermore, in Social Learning, independent agents implicitly share successful experience by observing other agents within their field of view, without additional communication or parameter-sharing overhead. Finally, experimental evaluation of S2L on a complex task characterized by sparse rewards and an intricate sequential structure demonstrates its superior performance over competing exploration baselines.
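The abstract does not spell out the exact reward formulation, but a minimal sketch may clarify the Self-Learning idea of combining the sparse extrinsic reward with a task-agnostic intrinsic term (here a count-based novelty bonus, used purely for illustration) and a task-specific intrinsic term (here a subtask-progress bonus). The class name, coefficients, and bonus definitions below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class IntrinsicRewardModule:
    """Illustrative reward shaping in the spirit of S2L's Self-Learning:
    extrinsic reward + task-agnostic novelty bonus + task-specific progress bonus.
    All names and coefficients are hypothetical."""

    def __init__(self, novelty_coef: float = 0.1, task_coef: float = 0.5):
        self.novelty_coef = novelty_coef
        self.task_coef = task_coef
        self.visit_counts = {}  # state key -> number of visits

    def novelty_bonus(self, state_key) -> float:
        """Task-agnostic bonus: decays as a state is visited more often."""
        n = self.visit_counts.get(state_key, 0) + 1
        self.visit_counts[state_key] = n
        return 1.0 / np.sqrt(n)

    def task_bonus(self, progress_delta: float) -> float:
        """Task-specific bonus: rewards measurable progress on the current subtask."""
        return max(progress_delta, 0.0)

    def shaped_reward(self, extrinsic: float, state_key, progress_delta: float) -> float:
        """Combine the sparse environment reward with both intrinsic terms."""
        return (extrinsic
                + self.novelty_coef * self.novelty_bonus(state_key)
                + self.task_coef * self.task_bonus(progress_delta))


if __name__ == "__main__":
    module = IntrinsicRewardModule()
    # Sparse-reward step: extrinsic reward is zero, yet the agent still receives
    # an exploration signal from novelty and subtask progress.
    r = module.shaped_reward(extrinsic=0.0, state_key=(3, 7), progress_delta=0.2)
    print(f"shaped reward: {r:.3f}")
```

Social Learning, as described, would add an observation-based imitation signal when a nearby successful agent is in view; since no mechanism is detailed in the abstract, it is omitted from the sketch.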