Curiosity-driven Exploration in Sparse-reward Multi-agent Reinforcement Learning

Published: 20 Jul 2023, Last Modified: 01 Sept 2023 (EWRL16)
Keywords: Reinforcement Learning, Sparsity, Exploration, Intrinsic Motivation
Abstract: Reward sparsity negatively affects the sample efficiency of deep reinforcement learning methods. A viable way to deal with sparse rewards is to learn via intrinsic motivation, which adds an intrinsic reward to the reward function to encourage the agent to explore the environment and expand the sample space. Although intrinsic motivation methods are widely used to improve data efficiency in reinforcement learning, they suffer from the so-called detachment problem. In this article, we discuss the limitations of the intrinsic curiosity module (ICM) in sparse-reward multi-agent reinforcement learning and propose a method called I-Go-Explore, which combines the intrinsic curiosity module with the Go-Explore framework to alleviate the detachment problem.
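
As a rough illustration of the intrinsic-reward shaping described in the abstract, the sketch below computes an ICM-style curiosity bonus from a forward model's prediction error and adds it to the sparse extrinsic reward. This is a minimal sketch, not the paper's implementation: the network sizes, the one-hot action encoding, and the scaling coefficient `beta` are illustrative assumptions.

```python
# Minimal sketch of ICM-style intrinsic reward shaping (illustrative, not the paper's code).
import torch
import torch.nn as nn


class ICM(nn.Module):
    """Intrinsic curiosity module: forward-model prediction error serves as a curiosity bonus."""

    def __init__(self, obs_dim: int, action_dim: int, feature_dim: int = 64):
        super().__init__()
        # Encode observations into a compact feature space.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feature_dim), nn.ReLU())
        # Predict the next feature from the current feature and the action.
        self.forward_model = nn.Sequential(
            nn.Linear(feature_dim + action_dim, feature_dim),
            nn.ReLU(),
            nn.Linear(feature_dim, feature_dim),
        )

    def intrinsic_reward(self, obs, action_onehot, next_obs):
        phi = self.encoder(obs)
        phi_next = self.encoder(next_obs)
        pred_phi_next = self.forward_model(torch.cat([phi, action_onehot], dim=-1))
        # Larger prediction error -> less familiar transition -> larger curiosity bonus.
        return 0.5 * (pred_phi_next - phi_next).pow(2).sum(dim=-1)


def shaped_reward(extrinsic, intrinsic, beta=0.01):
    # The agent is trained on the sparse extrinsic reward plus a scaled curiosity bonus.
    return extrinsic + beta * intrinsic
```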