When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment

Published: 21 Sept 2023, Last Modified: 21 Dec 2023 · NeurIPS 2023 (oral)
Keywords: Memory-based RL, Transformers, Credit Assignment, Online RL, Model-free RL
TL;DR: Transformers can help learn long-term memory but not long-term credit assignment in online model-free RL.
Abstract: Reinforcement learning (RL) algorithms face two distinct challenges: learning effective representations of past and present observations, and determining how actions influence future returns. Both challenges involve modeling long-term dependencies. The Transformer architecture has been very successful at solving problems that involve long-term dependencies, including in the RL domain. However, the underlying reason for the strong performance of Transformer-based RL methods remains unclear: is it because they learn effective memory, or because they perform effective credit assignment? After introducing formal definitions of memory length and credit assignment length, we design simple configurable tasks to measure these distinct quantities. Our empirical results reveal that Transformers can enhance the memory capability of RL algorithms, scaling up to tasks that require memorizing observations made $1500$ steps earlier. However, Transformers do not improve long-term credit assignment. In summary, our results provide an explanation for the success of Transformers in RL, while also highlighting an important area for future research and benchmark design. Our code is open-sourced at https://github.com/twni2016/Memory-RL.
Supplementary Material: pdf
Submission Number: 487
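To make the "configurable tasks" idea from the abstract concrete, below is a minimal sketch of a toy environment with an adjustable memory length but a fixed, short credit assignment length, in the spirit of the paper's passive T-Maze. The class name `DelayedCueEnv`, its interface, and its observation encoding are illustrative assumptions, not the authors' code; the actual benchmark tasks live in the linked repository.

```python
import numpy as np


class DelayedCueEnv:
    """Illustrative toy task decoupling memory length from credit assignment.

    A binary cue is observable only at the first step. The agent then
    traverses `memory_len` filler steps before a final decision step, where
    it is rewarded for reproducing the cue. The required memory length thus
    equals `memory_len`, while the credit assignment length stays at 1,
    since the reward immediately follows the decisive action.

    NOTE: this sketch is an assumption-laden simplification, not the
    paper's passive T-Maze implementation.
    """

    def __init__(self, memory_len: int = 50, seed: int = 0):
        self.memory_len = memory_len
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.cue = int(self.rng.integers(2))
        # Observation channels: [cue (visible only now), normalized time].
        return np.array([2.0 * self.cue - 1.0, 0.0], dtype=np.float32)

    def step(self, action: int):
        self.t += 1
        done = self.t >= self.memory_len
        # The cue channel is zeroed out after the first observation.
        obs = np.array([0.0, self.t / self.memory_len], dtype=np.float32)
        # Reward only on the last step, for recalling the initial cue.
        reward = float(action == self.cue) if done else 0.0
        return obs, reward, done, {}
```

Sweeping `memory_len` (the abstract reports scaling up to $1500$ steps) isolates the memory challenge: any failure to learn this task is attributable to memory, not to delayed reward, because the reward always arrives one step after the action it credits.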