Adaptive Interest for Emphatic Reinforcement Learning

Published: 14 Jun 2022, Last Modified: 05 May 2023
DARL 2022 Spotlight
TL;DR: We propose a way to automatically learn the interest function of emphatic algorithms and verify our approach on a wide range of environments.
Abstract: Emphatic algorithms have shown great promise in stabilizing and improving reinforcement learning by selectively emphasizing the update rule. Although the emphasis fundamentally depends on an interest function that defines the intrinsic importance of each state, most approaches simply adopt a uniform interest over all states (except where a hand-designed interest is possible based on domain knowledge). In this paper, we investigate adaptive methods that allow the interest function to vary dynamically over states and iterations. In particular, we leverage meta-gradients to automatically discover, online, an interest function that accelerates the agent's learning process. Empirical evaluations on a wide range of environments show that adapting the interest is key to providing significant gains. Qualitative analysis indicates that the learned interest function emphasizes states of particular importance, such as bottlenecks, which can be especially useful in a transfer learning setting.
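The abstract describes the mechanism only at a high level; the sketch below illustrates how a state-conditional interest could enter an on-policy emphatic TD(0) update and be adapted online by a meta-gradient. This is a minimal illustration under stated assumptions, not the paper's implementation: the linear value function, the sigmoid interest parameterization, and the squared-TD meta-objective (the names `value`, `interest`, `emphatic_td_step`, `meta_loss`, `adapt_interest`) are all illustrative choices.

```python
# Minimal sketch (not the authors' code): emphatic TD(0) with a learnable,
# state-conditional interest i_eta(s), adapted via a meta-gradient in JAX.
import jax
import jax.numpy as jnp

GAMMA = 0.99  # assumed constant discount


def value(w, s):
    # Linear value estimate v_w(s) for a feature vector s (an assumption).
    return jnp.dot(w, s)


def interest(eta, s):
    # State-conditional interest in (0, 1); a linear-sigmoid parameterization
    # is an assumption, any differentiable function of s would do.
    return jax.nn.sigmoid(jnp.dot(eta, s))


def emphatic_td_step(w, followon, eta, s, r, s_next, alpha):
    # On-policy ETD(0) (rho = 1, lambda = 0):
    #   F_t = gamma * F_{t-1} + i(S_t),   emphasis M_t = F_t.
    followon = GAMMA * followon + interest(eta, s)
    td_error = r + GAMMA * value(w, s_next) - value(w, s)
    grad_v = jax.grad(value)(w, s)
    w = w + alpha * followon * td_error * grad_v
    return w, followon


def meta_loss(eta, w, followon, transition, alpha):
    # Differentiate *through* one emphatic update: apply the update with the
    # current interest, then score the updated weights with a plain squared
    # TD error on the same transition (an assumed meta-objective).
    s, r, s_next = transition
    w_new, _ = emphatic_td_step(w, followon, eta, s, r, s_next, alpha)
    td_error = r + GAMMA * value(w_new, s_next) - value(w_new, s)
    return td_error ** 2


def adapt_interest(eta, w, followon, transition, alpha, meta_lr=1e-3):
    # Meta-gradient step on the interest parameters eta.
    g = jax.grad(meta_loss)(eta, w, followon, transition, alpha)
    return eta - meta_lr * g
```

In practice the meta-objective and the form of the interest network would follow the paper's design; the sketch only shows the mechanism of backpropagating through the emphatic update to adapt the interest.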