STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning

Published: 21 Sept 2023 · Last Modified: 21 Dec 2023 · NeurIPS 2023 poster
Keywords: deep learning, reinforcement learning, model-based reinforcement learning, world model, learning in imagination, transformer, variational autoencoders, sequence modeling
TL;DR: We introduce an efficient world model structure that consists of a variational autoencoder and a GPT-like transformer for model-based reinforcement learning.
Abstract: Recently, model-based reinforcement learning algorithms have demonstrated remarkable efficacy in visual input environments. These approaches begin by constructing a parameterized simulation world model of the real environment through self-supervised learning. By leveraging the world model's imagination, the agent's policy can be improved without the constraints of sampling from the real environment. The performance of these algorithms heavily relies on the sequence modeling and generation capabilities of the world model. However, constructing a perfectly accurate model of a complex unknown environment is nearly impossible. Discrepancies between the model and reality may cause the agent to pursue virtual goals, resulting in subpar performance in the real environment. Introducing random noise into model-based reinforcement learning has proven beneficial. In this work, we introduce Stochastic Transformer-based wORld Model (STORM), an efficient world model architecture that combines the strong sequence modeling and generation capabilities of Transformers with the stochastic nature of variational autoencoders. STORM achieves a mean human performance of $126.7\%$ on the Atari $100$k benchmark, setting a new record among state-of-the-art methods that do not employ lookahead search techniques. Moreover, training an agent on $1.85$ hours of real-time interaction experience requires only $4.3$ hours on a single NVIDIA GeForce RTX 3090 graphics card, showcasing improved efficiency compared to previous methodologies.
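For illustration, below is a minimal PyTorch sketch of the kind of architecture the abstract describes: a variational autoencoder producing stochastic latents from image observations, and a GPT-like causal transformer that predicts the next latent and reward from the latent/action sequence. All module names, layer sizes, and the Gaussian reparameterized latent are assumptions made for this sketch; they are not STORM's actual implementation (the paper's stochastic latent and head designs may differ).

import torch
import torch.nn as nn

class StochasticTransformerWorldModel(nn.Module):
    # Illustrative sketch only: a VAE encoder + GPT-like causal transformer.
    # Hyperparameters and latent parameterization are assumptions, not STORM's.
    def __init__(self, obs_channels=3, latent_dim=32, d_model=256,
                 n_layers=4, n_heads=4, action_dim=18):
        super().__init__()
        # VAE encoder: image observation -> mean and log-variance of latent
        self.encoder = nn.Sequential(
            nn.Conv2d(obs_channels, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * latent_dim),
        )
        # Embed (latent, action) pairs into the transformer's token space
        self.token = nn.Linear(latent_dim + action_dim, d_model)
        # GPT-like causal transformer over the latent/action sequence
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        # Heads predicting the next latent distribution and the reward
        self.next_latent = nn.Linear(d_model, 2 * latent_dim)
        self.reward = nn.Linear(d_model, 1)

    def encode(self, obs):
        # Reparameterization trick: sample a stochastic latent
        mean, logvar = self.encoder(obs).chunk(2, dim=-1)
        return mean + torch.randn_like(mean) * (0.5 * logvar).exp()

    def forward(self, latents, actions):
        # latents: (B, T, latent_dim); actions: (B, T, action_dim) one-hot
        tokens = self.token(torch.cat([latents, actions], dim=-1))
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)).to(tokens.device)  # causal attention mask
        h = self.transformer(tokens, mask=mask)
        return self.next_latent(h), self.reward(h)

In a world-model setup like the one described, rollouts "in imagination" would repeatedly feed the model's own predicted latents back in as inputs, letting the policy train on generated trajectories instead of real environment samples; the stochastic latent injects the noise the abstract argues is beneficial.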
Supplementary Material: zip
Submission Number: 4493