Action Guidance: Getting the Best of Sparse Rewards and Shaped Rewards for Real-time Strategy Games

28 Sept 2020 (modified: 22 Oct 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: reinforcement learning, real-time strategy games, sparse rewards, shaped rewards, policy gradient, sample-efficiency
Abstract: Training agents with reinforcement learning in games with sparse rewards is a challenging problem, since large amounts of exploration are required to obtain even the first reward. A common approach to this problem is reward shaping, which helps exploration but has an important drawback: agents sometimes learn to optimize the shaped reward instead of the true objective. In this paper, we present a novel technique that we call action guidance, which successfully trains agents to eventually optimize the true objective in games with sparse rewards while maintaining most of the sample efficiency that comes with reward shaping. We evaluate our approach in a simplified real-time strategy (RTS) game simulator called $\mu$RTS. (An illustrative sketch of the idea appears below.)
One-sentence Summary: Training agents to eventually optimize the true objective without losing the sample efficiency of reward shaping.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2010.03956/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=c3EIeu6hsi
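
To make the idea concrete, here is a minimal, hypothetical Python sketch of action guidance on a toy chain environment: a main policy is trained on the sparse (true) reward while an auxiliary policy is trained on the shaped reward, actions are sampled from the auxiliary policy with a probability that anneals to zero, and a clipped per-step importance ratio corrects the resulting off-policy updates. The environment, hyperparameters, and the clipped-ratio correction are illustrative assumptions, not the paper's exact algorithm (the paper trains policy-gradient agents in $\mu$RTS).

```python
import numpy as np

# Hypothetical 1-D chain environment: start at position 0 and walk to
# position N for the sparse reward (+1 at the goal, 0 everywhere else).
# The shaped reward adds a small progress bonus for moving right, which
# speeds up exploration but is not the true objective.
N = 10

def step(pos, action):  # action: 0 = left, 1 = right
    nxt = max(0, min(N, pos + (1 if action == 1 else -1)))
    sparse = 1.0 if nxt == N else 0.0
    shaped = sparse + 0.01 * (1.0 if action == 1 else -1.0)
    return nxt, sparse, shaped, nxt == N

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
theta_main = np.zeros((N + 1, 2))  # trained on the sparse (true) reward
theta_aux = np.zeros((N + 1, 2))   # trained on the shaped reward

eps, gamma, lr = 1.0, 0.99, 0.1    # eps = prob. of acting with the aux policy
for episode in range(2000):
    pos, traj = 0, []
    for _ in range(50):
        pi_m, pi_a = softmax(theta_main[pos]), softmax(theta_aux[pos])
        behavior = pi_a if rng.random() < eps else pi_m
        a = rng.choice(2, p=behavior)
        nxt, r_sparse, r_shaped, done = step(pos, a)
        traj.append((pos, a, r_sparse, r_shaped, behavior[a]))
        pos = nxt
        if done:
            break
    # REINFORCE-style updates for both learners from the same trajectory.
    G_sparse = G_shaped = 0.0
    for s, a, r_s, r_h, b_prob in reversed(traj):
        G_sparse = r_s + gamma * G_sparse
        G_shaped = r_h + gamma * G_shaped
        pi_m, pi_a = softmax(theta_main[s]), softmax(theta_aux[s])
        for theta, pi, G in ((theta_main, pi_m, G_sparse),
                             (theta_aux, pi_a, G_shaped)):
            # Clipped per-step importance ratio: a simplification of the
            # full off-policy correction, in the spirit of PPO/V-trace.
            rho = min(pi[a] / b_prob, 2.0)
            grad = -pi                      # grad of log pi(a|s) for softmax
            grad[a] += 1.0
            theta[s] += lr * rho * G * grad
    eps = max(0.0, eps - 1.0 / 1500)        # anneal the guidance away

# After training, the main policy should walk right toward the sparse goal.
print(np.argmax(theta_main, axis=1))
```

Annealing eps to zero hands control back to the main policy, so the final behavior optimizes the true sparse objective, while early training borrows the shaped reward's sample efficiency through the auxiliary policy's actions.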