TransDreamer: Reinforcement Learning with Transformer World Models

12 Oct 2021 (modified: 22 Oct 2023) · Deep RL Workshop, NeurIPS 2021 · Readers: Everyone
Keywords: Model-Based Reinforcement Learning, Transformer World Models, Representation Learning
TL;DR: We propose a transformer-based model-based reinforcement learning agent that can solve complex tasks requiring long-range memory and memory-based reasoning.
Abstract: The Dreamer agent provides various benefits of Model-Based Reinforcement Learning (MBRL) such as sample efficiency, reusable knowledge, and safe planning. However, its world model and policy networks inherit the limitations of recurrent neural networks, so an important question is how an MBRL framework can benefit from recent advances in transformers and what the challenges are in doing so. In this paper, we propose a transformer-based MBRL agent, called TransDreamer. We first introduce the Transformer State-Space Model, a world model that leverages a transformer for dynamics prediction. We then share this world model with a transformer-based policy network and achieve stable training of a transformer-based RL agent. In experiments, we apply the proposed model to 2D visual RL and 3D first-person visual RL tasks, both of which require long-range memory access for memory-based reasoning. We show that the proposed model outperforms Dreamer on these complex tasks.
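To make the idea in the abstract more concrete, here is a minimal sketch of a transformer-based latent dynamics (world) model in PyTorch. It is an illustrative approximation under assumed module names and dimensions, not the paper's actual Transformer State-Space Model: a causally masked transformer replaces the recurrent network that Dreamer's RSSM uses to produce deterministic states, from which a stochastic latent is sampled.

```python
# Illustrative sketch only: a causally masked transformer computing deterministic
# states from (observation embedding, action) sequences, plus a Gaussian head for
# the stochastic latent. All names, sizes, and heads here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerDynamics(nn.Module):
    def __init__(self, embed_dim=128, action_dim=4, stoch_dim=32,
                 num_layers=2, num_heads=4):
        super().__init__()
        # Project concatenated (observation embedding, action) to the model width.
        self.input_proj = nn.Linear(embed_dim + action_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        # Transformer stack in place of the RNN used by Dreamer's world model.
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Heads predicting a diagonal-Gaussian stochastic latent per step.
        self.prior_mean = nn.Linear(embed_dim, stoch_dim)
        self.prior_std = nn.Linear(embed_dim, stoch_dim)

    def forward(self, obs_embed, actions):
        # obs_embed: (batch, time, embed_dim); actions: (batch, time, action_dim)
        x = self.input_proj(torch.cat([obs_embed, actions], dim=-1))
        T = x.shape[1]
        # Upper-triangular mask so each step attends only to past steps.
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=x.device), diagonal=1)
        h = self.transformer(x, mask=causal_mask)      # deterministic states
        mean = self.prior_mean(h)
        std = F.softplus(self.prior_std(h)) + 0.1
        stoch = mean + std * torch.randn_like(mean)    # sampled stochastic state
        # The full latent (h, stoch) would feed reconstruction, reward, and
        # policy heads, which are omitted from this sketch.
        return h, stoch


# Example usage with random data.
model = TransformerDynamics()
obs_embed = torch.randn(8, 16, 128)   # batch of 8 sequences of length 16
actions = torch.randn(8, 16, 4)
h, stoch = model(obs_embed, actions)
print(h.shape, stoch.shape)           # (8, 16, 128) and (8, 16, 32)
```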
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2202.09481/code)