Adapting Vision-Language Models for Evaluating World Models

Published: 10 Jun 2025 | Last Modified: 14 Jul 2025 | ICML 2025 World Models Workshop | CC BY 4.0
Keywords: world models, evaluation, vision-language models, generative models
TL;DR: We propose a protocol for evaluating world models and introduce UNIVERSE, a method that adapts vision-language models to support it via unified finetuning.
Abstract: World models—generative models that simulate environment dynamics conditioned on past observations and actions—are gaining prominence in planning, simulation, and embodied AI. However, evaluating their rollouts remains a fundamental challenge: it requires fine-grained, temporally grounded assessment of action alignment and semantic consistency, capabilities not captured by existing metrics. Vision-Language Models (VLMs) have shown promise as automatic evaluators of generative content due to their strong multimodal reasoning abilities, yet their use in fine-grained, temporally sensitive evaluation tasks remains limited and requires targeted adaptation. We introduce an evaluation protocol targeting two recognition tasks, action recognition and character recognition, each assessed across binary, multiple-choice, and open-ended formats. To support this protocol, we present UNIVERSE (UNIfied Vision-language Evaluator for Rollouts in Simulated Environments), a method for adapting VLMs to rollout evaluation under data and compute constraints. The resulting unified evaluator matches the performance of task-specific baselines while using a single checkpoint. An accompanying study further examines alignment with human judgments, establishing UNIVERSE as a scalable, semantics-aware evaluator for world models.
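The abstract describes an evaluation grid of two recognition tasks, each posed in three question formats. The sketch below illustrates that structure only; the function and variable names (evaluate_rollout, run_protocol, the dummy scoring) are hypothetical placeholders, not the paper's actual API or implementation.

```python
from itertools import product

# Hypothetical names throughout; the paper does not specify this interface.
TASKS = ["action_recognition", "character_recognition"]
FORMATS = ["binary", "multiple_choice", "open_ended"]

def evaluate_rollout(rollout, task, question_format):
    """Stand-in for the VLM evaluator: in the real protocol, a task- and
    format-specific question about the rollout would be posed to the adapted
    VLM (e.g., UNIVERSE) and its parsed answer scored."""
    # Dummy score so the sketch runs; replace with an actual VLM query.
    return 0.0

def run_protocol(rollouts):
    """Aggregate scores over the full 2-task x 3-format evaluation grid."""
    return {
        (task, fmt): sum(evaluate_rollout(r, task, fmt) for r in rollouts)
        / max(len(rollouts), 1)
        for task, fmt in product(TASKS, FORMATS)
    }
```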
Submission Number: 36