FrameOracle: Learning What to See and How Much to See in Videos

ICLR 2026 Conference Submission 22308 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Video Understanding, Adaptive Frame Sampling, Vision-Language Model, Video Large Language Model
TL;DR: FrameOracle is a lightweight, plug-and-play module that jointly predicts which frames to keep and how many are needed for efficient video understanding.
Abstract: Vision-language models (VLMs) have advanced video understanding, but their performance is limited by the number of input frames they can process. Existing frame sampling strategies, such as uniform or fixed-budget selection, often fail to adapt to variations in information density or task complexity, resulting in inefficiency and information loss. To address this, we present **FrameOracle**, a lightweight and plug-and-play module that predicts both (1) which frames are most relevant to a given query and (2) how many frames are needed. FrameOracle is trained using a four-stage curriculum, with the first three stages relying on weak proxy signals such as cross-modal similarity. In the final stage, it leverages stronger supervision from a new dataset we introduce, **FrameOracle-41K**, the first large-scale VideoQA collection to provide keyframe annotations specifying the minimal set of frames required to answer each question. Extensive experiments across five VLMs and six benchmarks demonstrate that FrameOracle reduces 16-frame inputs to an average of 10.4 frames without any loss in accuracy. When starting from 64-frame candidates, it reduces the input to an average of 13.9 frames while improving accuracy by 1.4%, achieving state-of-the-art efficiency-accuracy trade-offs for scalable video understanding.
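To make the core idea concrete, below is a minimal sketch of query-conditioned frame selection using the kind of cross-modal-similarity proxy the abstract mentions for the early training stages. It is not the learned FrameOracle module itself: the `mass` threshold, the embedding shapes, and the function name are illustrative assumptions, chosen only to show how both *which* frames and *how many* can fall out of a single score distribution rather than a fixed budget.

```python
import numpy as np

def select_frames(frame_embs: np.ndarray,
                  query_emb: np.ndarray,
                  mass: float = 0.9) -> np.ndarray:
    """Score frames against the query and keep the smallest set covering
    `mass` of the relevance distribution (hypothetical proxy, not the
    learned FrameOracle predictor).

    frame_embs: (T, D) per-frame embeddings (e.g., from a CLIP-style encoder)
    query_emb:  (D,)   query embedding from the same joint space
    Returns kept frame indices in temporal order.
    """
    # Cosine similarity between each frame and the query.
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = f @ q                                   # (T,)

    # Softmax turns similarities into a relevance distribution.
    p = np.exp(scores - scores.max())
    p /= p.sum()

    # Keep top-scoring frames until `mass` of the distribution is
    # covered -- "how many" is decided by the scores, not a fixed budget.
    order = np.argsort(-p)
    cutoff = int(np.searchsorted(np.cumsum(p[order]), mass)) + 1
    return np.sort(order[:cutoff])                   # restore temporal order

# Toy usage: 16 random frame embeddings and one query embedding.
rng = np.random.default_rng(0)
frames, query = rng.normal(size=(16, 512)), rng.normal(size=512)
print(select_frames(frames, query, mass=0.9))
```

A peaked score distribution (one clearly relevant segment) yields few kept frames, while a flat one (uniformly informative video) keeps most of them, which mirrors the adaptive 16-to-10.4 and 64-to-13.9 frame reductions reported above.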
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 22308