MET-Bench: Multimodal Entity Tracking for Evaluating the Limitations of Vision-Language and Reasoning Models
Keywords: entity tracking, multimodal
TL;DR: Evaluates vision-language and reasoning models on a novel multimodal entity tracking benchmark, revealing a significant gap between text-based and image-based tracking.
Abstract: We introduce MET-Bench, a multimodal entity tracking benchmark designed to evaluate the ability of vision-language models to track entity states across modalities. Using two structured domains, Chess and the Shell Game, we assess how frontier models integrate textual and image-based state updates. Our findings reveal a significant performance gap between text-based and image-based tracking. We show that this gap stems from deficits in visual reasoning rather than perception, and that explicit text-based reasoning strategies improve performance; nevertheless, limitations remain, especially in long-horizon multimodal scenarios. MET-Bench highlights the need for improved multimodal representations and reasoning techniques to bridge the gap between textual and visual entity tracking.
Submission Number: 19