CinePile: A Long Video Question Answering Dataset and Benchmark

Published: 28 Oct 2024, Last Modified: 14 Jan 2025 · Video-Language Models Poster · CC BY 4.0
Track: Short Paper Track (up to 3 pages)
Keywords: Datasets and benchmarking, Video understanding, Multi-modal learning, Visual question answering, Long-form video, Metrics and benchmarks
Abstract: Current long-form video understanding datasets often fail to provide genuine comprehension challenges, as many of their tasks can be solved by analyzing only a few random frames. To address this issue, we present a novel dataset and benchmark, CinePile, specifically designed for authentic long-form video understanding. This paper details our approach to creating the question-answer dataset: we use advanced LLMs with a human in the loop, building on human-generated raw data. The resulting dataset comprises 305,000 multiple-choice questions (MCQs) covering a range of visual and multimodal aspects, including temporal comprehension, understanding human-object interactions, and reasoning about events or actions within a scene. Additionally, we evaluate recent video-centric LLMs, both open-source and proprietary, on the test split of our dataset. The findings reveal that even state-of-the-art video-centric LLMs significantly lag behind human performance on these tasks, highlighting the complexity and challenge inherent in video understanding.
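As context for the MCQ evaluation described in the abstract, here is a minimal sketch of how model predictions might be scored against a ground-truth answer key. All names and the data format are hypothetical illustrations; CinePile's actual evaluation harness may differ.

```python
# Hypothetical MCQ scoring sketch: compare a model's chosen option letters
# against the ground-truth key and report accuracy. The letter-based format
# is an assumption, not CinePile's documented interface.

def mcq_accuracy(predictions, answer_key):
    """Fraction of questions where the predicted option matches the key."""
    if len(predictions) != len(answer_key):
        raise ValueError("predictions and answer_key must be the same length")
    correct = sum(p == a for p, a in zip(predictions, answer_key))
    return correct / len(answer_key)

# Example: 3 of 4 questions answered correctly.
preds = ["B", "C", "A", "D"]
key = ["B", "C", "A", "A"]
print(mcq_accuracy(preds, key))  # 0.75
```

Human and model scores computed this way can be compared directly, which is how the human-vs-model performance gap noted above would typically be quantified.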
Supplementary Material: pdf
Submission Number: 38