Keywords: reinforcement learning, world model, model-based reinforcement learning, crafter, mamba, diffusion
TL;DR: EDELINE is a diffusion-based world modeling method that outperforms existing baselines.
Abstract: World models represent a promising approach for training reinforcement learning agents with significantly improved sample efficiency. While most world model methods primarily rely on sequences of discrete latent variables to model environment dynamics, this compression often neglects visual details that are critical for reinforcement learning. Recent diffusion-based world models condition generation on a fixed-length context of frames to predict the next observation, using separate recurrent neural networks to model rewards and termination signals. Although this architecture effectively enhances visual fidelity, the fixed-length context inherently limits memory capacity. In this paper, we introduce EDELINE, a unified world model architecture that integrates state space models with diffusion models. Our approach demonstrates superior performance on the memory-demanding Crafter benchmark.
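To make the architectural idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code) of a unified world model in which a recurrent state-space backbone summarizes the entire observation history and a diffusion denoiser, conditioned on that summary, predicts the next frame, with reward and termination heads sharing the same representation. A GRU stands in for the Mamba-style state space block; all names, shapes, and the noise schedule are illustrative assumptions.

```python
# Hypothetical sketch under assumed shapes; not EDELINE's actual implementation.
import torch
import torch.nn as nn


class UnifiedWorldModelSketch(nn.Module):
    def __init__(self, obs_dim=64 * 64 * 3, hidden_dim=512, steps=1000):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)                 # frame -> embedding
        self.ssm = nn.GRU(hidden_dim, hidden_dim, batch_first=True)   # stand-in for a Mamba/SSM block
        # Denoiser predicts the noise added to the next frame, conditioned on
        # the recurrent history summary and a diffusion-timestep embedding.
        self.denoiser = nn.Sequential(
            nn.Linear(obs_dim + hidden_dim + 1, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, obs_dim),
        )
        self.reward_head = nn.Linear(hidden_dim, 1)   # unified heads share the
        self.done_head = nn.Linear(hidden_dim, 1)     # same history summary
        betas = torch.linspace(1e-4, 0.02, steps)
        self.register_buffer("alpha_bar", torch.cumprod(1 - betas, dim=0))

    def forward(self, obs_seq, next_obs):
        # obs_seq: (B, T, obs_dim) full history; next_obs: (B, obs_dim) target frame.
        h_seq, _ = self.ssm(self.encoder(obs_seq))
        h = h_seq[:, -1]                                               # history summary
        t = torch.randint(0, self.alpha_bar.numel(), (next_obs.shape[0],), device=next_obs.device)
        a = self.alpha_bar[t].unsqueeze(-1)
        noise = torch.randn_like(next_obs)
        noisy = a.sqrt() * next_obs + (1 - a).sqrt() * noise           # forward diffusion
        t_emb = (t.float() / self.alpha_bar.numel()).unsqueeze(-1)
        pred_noise = self.denoiser(torch.cat([noisy, h, t_emb], dim=-1))
        return {
            "diffusion_loss": nn.functional.mse_loss(pred_noise, noise),
            "reward_pred": self.reward_head(h).squeeze(-1),            # trained against env rewards
            "done_logit": self.done_head(h).squeeze(-1),               # trained against terminations
        }


# Quick usage check with random data (batch of 2, history length 8):
model = UnifiedWorldModelSketch()
out = model(torch.randn(2, 8, 64 * 64 * 3), torch.randn(2, 64 * 64 * 3))
```

The intended contrast with fixed-context diffusion world models is that the recurrent summary `h` can, in principle, carry information from arbitrarily far back in the episode, rather than only from the last few frames.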
Submission Number: 31