Modern Hopfield Networks for Return Decomposition for Delayed Rewards

12 Oct 2021 (modified: 30 Nov 2021) · Deep RL Workshop NeurIPS 2021
Keywords: RL, reinforcement learning, RUDDER, reward redistribution, return decomposition, delayed rewards, Hopfield networks, modern Hopfield networks, associative memory, offline RL, sample efficient RL, human demonstrations, MineRL, Minecraft
TL;DR: Hopfield-RUDDER drastically simplifies environments for RL by identifying important good and bad decisions from only a few training episodes.
Abstract: Delayed rewards, which are separated from their causative actions by irrelevant actions, hamper learning in reinforcement learning (RL). Real-world problems, in particular, often contain such delayed and sparse rewards. Recently, return decomposition for delayed rewards (RUDDER) employed pattern recognition to remove or reduce delay in rewards, which dramatically simplifies the learning task of the underlying RL method. RUDDER was realized using a long short-term memory (LSTM) network. The LSTM was trained to identify important state-action pair patterns responsible for the return. Reward was then redistributed to these important state-action pairs. However, training the LSTM is often difficult and requires a large number of episodes. In this work, we replace the LSTM with the recently proposed continuous modern Hopfield networks (MHN) and introduce Hopfield-RUDDER. MHN are powerful trainable associative memories with large storage capacity. They require only a few training samples and excel at identifying and recognizing patterns. We use this property of MHN to identify important state-action pairs that are associated with low- or high-return episodes and directly redistribute reward to them. However, in partially observable environments, Hopfield-RUDDER requires additional information about the history of state-action pairs. Therefore, we evaluate several methods for compressing history and introduce reset-max history, a lightweight history compression using the max operator in combination with a reset gate. We experimentally show that Hopfield-RUDDER outperforms LSTM-based RUDDER on various 1D environments when only a small number of episodes is available. Finally, we show in preliminary experiments that Hopfield-RUDDER scales to highly complex environments with the Minecraft ObtainDiamond task from the MineRL NeurIPS challenge.
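The two core mechanisms described in the abstract can be illustrated compactly. The sketch below is a minimal, hypothetical NumPy rendition, not the authors' implementation: `reset_max_history` compresses the history of per-step features with a running max that a reset gate can clear, and `hopfield_redistribute` performs a continuous modern Hopfield retrieval (softmax attention over stored state-action patterns labeled with their episode returns) and then, in RUDDER fashion, redistributes reward as differences of consecutive return predictions. All names, shapes, and the inverse temperature `beta` are illustrative assumptions.

```python
import numpy as np

def reset_max_history(features, reset_gate):
    """Reset-max history compression (sketch).

    features:   (T, d) per-step feature vectors of one episode
    reset_gate: (T,) booleans; True clears the running max at step t
    returns:    (T, d) element-wise running max of features since last reset
    """
    compressed = np.zeros_like(features)
    running = np.full(features.shape[1], -np.inf)
    for t in range(features.shape[0]):
        if reset_gate[t]:
            running = np.full(features.shape[1], -np.inf)
        running = np.maximum(running, features[t])
        compressed[t] = running
    return compressed

def hopfield_redistribute(queries, stored, stored_returns, beta=1.0):
    """Reward redistribution via modern Hopfield retrieval (sketch).

    queries:        (T, d) compressed state-action features of a new episode
    stored:         (N, d) stored state-action patterns from training episodes
    stored_returns: (N,) return of the episode each stored pattern came from
    returns:        (T,) redistributed rewards
    """
    # Continuous MHN retrieval is softmax attention over stored patterns.
    sim = beta * queries @ stored.T                  # (T, N) similarities
    sim -= sim.max(axis=1, keepdims=True)            # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)
    # Per-step return prediction: association with low/high-return episodes.
    g = weights @ stored_returns                     # (T,)
    # RUDDER-style redistribution: difference of consecutive predictions.
    return np.diff(g, prepend=0.0)
```

By construction the redistributed rewards sum to the final return prediction, so a step strongly associated with high-return episodes receives a large positive reward at the point where the association forms, rather than at the delayed end of the episode.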
Supplementary Material: zip