SurveillanceVQA-589K: A Benchmark for Comprehensive Surveillance Video-Language Understanding with Large Models
Keywords: multimodal learning; benchmark; surveillance video; QA
TL;DR: SurveillanceVQA-589K provides a practical and comprehensive resource for advancing video-language understanding in safety-critical applications.
Abstract: Understanding surveillance video content remains a critical yet underexplored challenge in vision–language research, particularly due to its real-world complexity, irregular event dynamics, and safety-critical implications. In this work, we introduce SurveillanceVQA-589K, the largest open-ended video question answering (VQA) benchmark tailored to the surveillance domain. The dataset comprises 589,380 QA pairs spanning 12 cognitively diverse question types, including temporal reasoning, causal inference, spatial understanding, and anomaly interpretation, across both normal and abnormal video scenarios. To construct the benchmark at scale, we design a hybrid annotation pipeline that combines temporally aligned human-written captions with Large Vision-Language Model (LVLM)-assisted QA generation using prompt-based techniques. We also propose a multi-dimensional evaluation protocol to assess contextual, temporal, and causal comprehension. We evaluate 12 LVLMs under this framework, revealing significant performance gaps, particularly in causal and anomaly-related tasks, and underscoring the limitations of current models in real-world surveillance contexts. Our benchmark provides a practical and comprehensive resource for advancing video-language understanding in safety-critical applications such as intelligent monitoring, incident analysis, and autonomous decision-making. The dataset is publicly available at: https://anonymous.4open.science/r/SurveillanceVQA-589K.
Primary Area: datasets and benchmarks
Submission Number: 721