VideoEval: Comprehensive Benchmark Suite for Low-Cost Evaluation of Video Foundation Model

26 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Video Understanding, Video Foundation Model, Benchmark
TL;DR: A vision-centric evaluation method for video foundation models that is comprehensive, challenging, indicative, and low-cost.
Abstract: With the accumulation of high-quality data and advancements in visual pretraining paradigms, recent Video Foundation Models (VFMs) have made significant progress, demonstrating remarkable performance on popular video understanding benchmarks. However, conventional benchmarks (e.g., Kinetics) and evaluation protocols are limited by their relatively poor diversity, high evaluation costs, and saturated performance metrics. In this work, we introduce a comprehensive benchmark suite to address these issues, namely **VideoEval**. We establish the **Vid**eo **T**ask **A**daption **B**enchmark (VidTAB) and the **Vid**eo **E**mbedding **B**enchmark (VidEB) from two perspectives: evaluating the task adaptability of VFMs under few-shot conditions and assessing the direct applicability of their feature embeddings to downstream tasks. With VideoEval, we conduct a large-scale study of 20 popular open-source vision foundation models. Our study reveals several insightful findings: 1) overall, current VFMs exhibit weak generalization across diverse tasks; 2) increasing video data, whether labeled or in video-text pairs, does not necessarily improve task performance; 3) the effectiveness of some pre-training paradigms may not be fully validated in previous benchmarks; and 4) combining different pre-training paradigms can help develop models with better generalization capabilities. We believe this study serves as an important complement to current evaluation methods for VFMs and offers valuable insights for future research directions.
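To make the two evaluation perspectives concrete, the sketch below shows what such a protocol could look like in practice: a frozen backbone supplies clip embeddings, a lightweight linear probe stands in for VidTAB-style few-shot adaptation, and cosine-similarity nearest-neighbor matching stands in for VidEB-style direct embedding use. The `encode_video` stub, shot counts, and probe choice are illustrative assumptions, not the paper's actual models, datasets, or hyperparameters.

```python
# Illustrative sketch (not the authors' code) of the two VideoEval protocols:
# (1) VidTAB-style few-shot adaptation: train a lightweight probe on frozen
#     VFM features from K labeled clips per class, then score held-out clips.
# (2) VidEB-style embedding evaluation: use the frozen embeddings directly
#     via nearest-neighbor retrieval, with no training at all.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def encode_video(clips: np.ndarray) -> np.ndarray:
    """Placeholder for a frozen VFM: maps N clips to (N, D) embeddings."""
    return rng.standard_normal((len(clips), 512))

# Toy setup: 4 classes, 16 shots per class for adaptation, 40 eval clips.
n_classes, shots, n_eval = 4, 16, 40
train_clips = np.zeros(n_classes * shots)            # stand-ins for raw videos
train_labels = np.repeat(np.arange(n_classes), shots)
eval_clips = np.zeros(n_eval)
eval_labels = rng.integers(0, n_classes, n_eval)

train_emb = encode_video(train_clips)
eval_emb = encode_video(eval_clips)

# (1) Few-shot task adaptation: linear probe on top of frozen features.
probe = LogisticRegression(max_iter=1000).fit(train_emb, train_labels)
adapt_acc = (probe.predict(eval_emb) == eval_labels).mean()

# (2) Direct embedding use: cosine-similarity 1-NN against the labeled set.
def l2norm(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=1, keepdims=True)

sims = l2norm(eval_emb) @ l2norm(train_emb).T        # (n_eval, n_train)
nn_pred = train_labels[sims.argmax(axis=1)]
embed_acc = (nn_pred == eval_labels).mean()

print(f"few-shot probe accuracy: {adapt_acc:.3f}")
print(f"1-NN embedding accuracy: {embed_acc:.3f}")
```

Keeping the backbone frozen in both branches is what makes this style of evaluation low-cost: only the probe's parameters are trained, and the embedding branch requires no training at all.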
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7392