Keywords: Video Generation, Benchmark, World Model
Abstract: Video generation models have rapidly progressed, positioning themselves as video world models capable of supporting decision-making applications such as robotics and autonomous driving. However, current benchmarks fail to rigorously evaluate these claims, focusing only on general video quality and ignoring factors important to world models, such as physics adherence.
To bridge this gap, we propose WorldModelBench, a benchmark designed to evaluate the world modeling capabilities of video generation models in application-driven domains. WorldModelBench offers two key advantages: (1) Sensitivity to nuanced world modeling violations: By incorporating instruction-following and physics-adherence dimensions, WorldModelBench detects subtle violations, such as irregular changes in object size that breach the law of mass conservation, which prior benchmarks overlook. (2) Alignment with large-scale human preferences: We crowd-source 67K human labels to accurately evaluate 14 frontier models.
Using our high-quality human labels, we further fine-tune an accurate judger to automate the evaluation procedure; with only 2B parameters, it achieves 9.9% lower error than GPT-4o in predicting world modeling violations.
In addition, we demonstrate that training video generation models to align with human annotations by maximizing rewards from the judger noticeably improves their world modeling capability. The dataset is hosted on Hugging Face at https://huggingface.co/datasets/Efficient-Large-Model/worldmodelbench. The code to run the evaluation is available at https://github.com/WorldModelBench-Team/WorldModelBench.
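A minimal sketch of loading the benchmark with the Hugging Face `datasets` library, assuming the repository exposes a standard `datasets`-loadable format; the split names and fields are not specified above, so inspect the returned object and the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load WorldModelBench from the Hugging Face Hub. This returns a DatasetDict
# whose split names and fields depend on the repository layout (an assumption
# here; check the dataset card).
ds = load_dataset("Efficient-Large-Model/worldmodelbench")

print(ds)                      # available splits and their features
first_split = next(iter(ds))   # name of the first split
print(ds[first_split][0])      # first benchmark example in that split
```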
Croissant File: json
Dataset URL: https://huggingface.co/datasets/Efficient-Large-Model/worldmodelbench
Code URL: https://github.com/WorldModelBench-Team/WorldModelBench
Primary Area: Datasets & Benchmarks for applications in computer vision
Submission Number: 2378