Keywords: World models, autonomous driving, generative models, video generation
TL;DR: We propose a world model for autonomous driving capable of long-term rollouts in challenging scenarios, achieving state-of-the-art results. The model is trained for next-frame prediction on video data.
Abstract: Existing world models for autonomous driving struggle with long-horizon generation and generalization to challenging scenarios. In this work, we develop a model using simple design choices and without additional supervision or sensors, such as maps, depth, or multiple cameras.
We show that our model yields state-of-the-art performance, despite having only 469M parameters and being trained on 280h of video data. It particularly stands out in difficult scenarios like turning maneuvers and urban traffic. We investigate whether discrete token models have advantages over continuous models based on flow matching. To this end, we set up a hybrid tokenizer that is compatible with both approaches and allows for a side-by-side comparison. Our study concludes in favor of the continuous autoregressive model, which is less sensitive to individual design choices and more powerful than the model built on discrete tokens. The project page with code, model checkpoints, and visualizations can be found here: https://lmb-freiburg.github.io/orbis.github.io/
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 21535