With the accumulation of high-quality data and advancements in visual pretraining paradigms, recent Video Foundation Models (VFMs) have made significant progress, demonstrating remarkable performance on popular video understanding benchmarks. However, conventional benchmarks (e.g., Kinetics) and evaluation protocols are limited by their relatively poor diversity, high evaluation costs, and saturated performance metrics. In this work, we introduce a comprehensive benchmark suite, VideoEval, to address these issues. We establish the Video Task Adaption Benchmark (VidTAB) and the Video Embedding Benchmark (VidEB) from two perspectives: evaluating the task adaptability of VFMs under few-shot conditions and assessing the direct applicability of their feature embeddings to downstream tasks. With VideoEval, we conduct a large-scale study of 20 popular open-source vision foundation models. Our study reveals several insightful findings: 1) overall, current VFMs exhibit weak generalization across diverse tasks; 2) increasing video data, whether labeled or in video-text pairs, does not necessarily improve task performance; 3) the effectiveness of some pre-training paradigms may not be fully validated in previous benchmarks; and 4) combining different pre-training paradigms can help develop models with better generalization capabilities. We believe this study serves as an important complement to current evaluation methods for VFMs and offers valuable insights for future research directions.
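To make the two evaluation perspectives concrete, the following is a minimal sketch (an illustrative assumption, not the VideoEval protocol): few-shot task adaptability is approximated by fitting a lightweight linear probe on frozen VFM features (in the spirit of VidTAB), while direct embedding quality is scored with a training-free nearest-neighbour classifier (in the spirit of VidEB). The `extract_features` placeholder stands in for whatever frozen backbone is being evaluated.

```python
# Hedged sketch: probing frozen video-model features under few-shot conditions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def extract_features(num_clips, dim=768):
    """Placeholder for embeddings a frozen VFM would produce for video clips."""
    return rng.normal(size=(num_clips, dim))

num_classes, shots_per_class = 5, 4
train_feats = extract_features(num_classes * shots_per_class)
train_labels = np.repeat(np.arange(num_classes), shots_per_class)
test_feats = extract_features(100)
test_labels = rng.integers(0, num_classes, size=100)

# Few-shot task adaptation (VidTAB-style assumption): linear probe on frozen features.
probe = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
print("probe acc:", accuracy_score(test_labels, probe.predict(test_feats)))

# Direct embedding applicability (VidEB-style assumption): 1-NN with no backbone training.
knn = KNeighborsClassifier(n_neighbors=1).fit(train_feats, train_labels)
print("k-NN acc:", accuracy_score(test_labels, knn.predict(test_feats)))
```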