Understanding Complexity in VideoQA via Visual Program Generation

ICLR 2025 Conference Submission 1394 Authors

17 Sept 2024 (modified: 27 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: video understanding, codegen
TL;DR: We propose a data-driven method for assessing question complexity in Video Question Answering (VideoQA) based on code primitives, and use it to build a new benchmark that is nearly twice as hard for models as existing datasets.
Abstract: We propose a data-driven approach to analyzing query complexity in Video Question Answering (VideoQA). Previous efforts in benchmark design have largely relied on human expertise to construct challenging samples. In this work, we experimentally demonstrate that humans struggle to accurately estimate which questions are hard for machine learning models to answer. Our alternative, automated approach takes advantage of recent advances in code generation for visual question answering. In particular, we use the complexity of the generated code as a proxy for question complexity and demonstrate that it correlates much more strongly with model performance than human estimates do. We then present a novel algorithm for estimating question complexity from code, which identifies fine-grained primitives that correlate with the hardest questions. These human-interpretable results lead to a number of discoveries about the key sources of complexity for VideoQA models. Finally, we extend our approach to generate complex questions for a given set of videos. This allows us to automatically construct a new benchmark that is 1.9 times harder for VideoQA methods than existing manually designed datasets.
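To make the core idea concrete, below is a minimal sketch of how generated-code complexity could be measured: parse a generated visual program and count its API-call primitives, using the primitive counts as a complexity proxy. This is an illustration under stated assumptions, not the authors' actual algorithm; the primitive names (`detect_event`, `trim_after`, `query_object`) and the example program are hypothetical stand-ins for whatever API the underlying codegen framework exposes.

```python
import ast
from collections import Counter

def extract_primitives(program: str) -> Counter:
    """Count API-call primitives (function/method names) appearing
    in a generated visual program."""
    counts = Counter()
    for node in ast.walk(ast.parse(program)):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Attribute):
                counts[node.func.attr] += 1   # method call, e.g. clip.find(...)
            elif isinstance(node.func, ast.Name):
                counts[node.func.id] += 1     # plain call, e.g. detect_event(...)
    return counts

def complexity_score(program: str) -> int:
    """A crude proxy: the total number of primitive calls in the program."""
    return sum(extract_primitives(program).values())

# Hypothetical generated program for the question
# "What does the person pick up after opening the fridge?"
example = """
fridge_open = detect_event(video, "person opens fridge")
clip = trim_after(video, fridge_open)
answer = query_object(clip, "What does the person pick up?")
"""

print(extract_primitives(example))  # Counter({'detect_event': 1, 'trim_after': 1, 'query_object': 1})
print(complexity_score(example))    # 3
```

In this toy version, a question whose program chains more temporal and object-level primitives receives a higher score; the paper's actual method goes further by identifying which fine-grained primitives correlate with the hardest questions.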
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1394