Time Blindness: Why Video-Language Models Can’t See What Humans Can?

16 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Vision Language Models, Temporal Understanding, Benchmark Construction, Low SNR dataset
TL;DR: All tested vision-language models that support video understanding (including GPT-4o and Gemini) achieve 0% accuracy on our benchmark, exposing a fundamental "time blindness" in current AI.
Abstract: Recent vision–language models (VLMs) have made impressive strides in understanding spatio-temporal relationships in videos. However, when spatial information is obscured, these models struggle to capture purely temporal patterns. We introduce $\textbf{SpookyBench}$, a benchmark where information is encoded solely in temporal sequences of noise-like frames, mirroring natural phenomena from biological signaling to covert communication. Interestingly, while humans can recognize shapes, text, and patterns in these sequences with over 98\% accuracy, state-of-the-art VLMs achieve 0\% accuracy. This performance gap highlights a critical limitation: an over-reliance on frame-level spatial features and an inability to extract meaning from temporal cues. Overcoming this limitation will require novel architectures or training paradigms that decouple spatial dependencies from temporal processing. Our systematic analysis shows that this issue persists across model scales and architectures. We release SpookyBench to catalyze research in temporal pattern recognition and to bridge the gap between human and machine video understanding. The dataset is available at this anonymous link: https://tinyurl.com/spooky-bench
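
The abstract describes the stimuli only at a high level, so the sketch below is purely illustrative and is not the authors' actual generation pipeline. It shows one generic way to encode a shape solely in temporal structure: every individual frame is spatially indistinguishable binary noise, but pixels inside a target region keep a fixed noise pattern across frames while background pixels are re-sampled each frame, so only the flicker statistics over time reveal the shape. The mask shape, frame count, and the helper name `make_temporal_noise_clip` are assumptions made for this example.

```python
# Illustrative sketch only -- not SpookyBench's real generation procedure.
# Encodes a shape purely in temporal structure, per the abstract's description.
import numpy as np


def make_temporal_noise_clip(mask: np.ndarray, num_frames: int = 60,
                             seed: int = 0) -> np.ndarray:
    """Return a (num_frames, H, W) uint8 clip in which the shape in `mask`
    is invisible in any single frame but recoverable from temporal statistics."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    # Foreground noise is sampled once and held fixed, so those pixels do not flicker.
    foreground = rng.integers(0, 2, size=(h, w), dtype=np.uint8)
    frames = np.empty((num_frames, h, w), dtype=np.uint8)
    for t in range(num_frames):
        # Background noise is re-sampled every frame, so it flickers.
        background = rng.integers(0, 2, size=(h, w), dtype=np.uint8)
        frames[t] = np.where(mask, foreground, background) * 255
    return frames


if __name__ == "__main__":
    # Hypothetical target: a filled square at the centre of a 128x128 grid.
    mask = np.zeros((128, 128), dtype=bool)
    mask[48:80, 48:80] = True
    clip = make_temporal_noise_clip(mask)

    # Spatially, any single frame is just binary noise in both regions...
    print("frame-0 mean inside/outside mask:",
          clip[0][mask].mean(), clip[0][~mask].mean())
    # ...but temporal variance differs: masked pixels are constant over time
    # (std == 0), while background pixels flicker (std > 0).
    print("temporal std inside mask:", clip[:, mask].std(axis=0).mean())
    print("temporal std outside mask:", clip[:, ~mask].std(axis=0).mean())
```

Under this construction, a frame-level feature extractor sees only noise, which is consistent with the paper's claim that meaning here must be read out from temporal cues rather than spatial ones.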
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 7921