Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering
Abstract: Video Question Answering (VideoQA) aims to answer natural language questions based on the information observed in videos. Despite the recent success of Large Multimodal Models (LMMs) in image-language understanding and reasoning, they handle VideoQA insufficiently: they simply take uniformly sampled frames as visual inputs, ignoring question-relevant visual clues. Moreover, existing VideoQA datasets provide no human annotations for question-critical timestamps. In light of this, we propose a novel weakly supervised framework that forces LMMs to reason out answers with question-critical moments as visual inputs. Specifically, we first fuse question-answer pairs into event descriptions and, leveraging the vision-language alignment capability of CLIP, locate multiple keyframes as target moments and pseudo-labels. With these pseudo-labeled keyframes as additional weak supervision, we devise a lightweight Gaussian-based Contrastive Grounding (GCG) module. GCG learns multiple Gaussian functions to characterize the temporal structure of the video and samples question-critical frames as positive moments, which serve as the visual inputs of LMMs. Extensive experiments on several benchmarks verify the effectiveness of our framework, and we achieve substantial improvements over previous state-of-the-art methods.
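The abstract describes the pipeline only at a high level. A minimal PyTorch-style sketch of the two core ideas, CLIP-similarity pseudo-labeling of keyframes and Gaussian-based frame weighting, is given below; this is an illustrative assumption of how such components could look, not the authors' implementation, and all function names, shapes, and hyperparameters (e.g. `pseudo_label_keyframes`, `GaussianGrounding`, `num_gaussians`, the 512-d features) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pseudo_label_keyframes(frame_feats, qa_feat, top_k=4):
    """Rank frames by cosine similarity to the fused question-answer
    embedding and return the indices of the most similar frames.
    Assumes features were already extracted by a CLIP-style encoder."""
    sims = F.cosine_similarity(frame_feats, qa_feat.unsqueeze(0), dim=-1)  # (T,)
    return sims.topk(top_k).indices  # pseudo-labeled keyframe indices

class GaussianGrounding(nn.Module):
    """Lightweight module that predicts K Gaussians over normalized time
    and scores each frame by the resulting mixture weight."""
    def __init__(self, feat_dim, num_gaussians=4):
        super().__init__()
        self.num_gaussians = num_gaussians
        self.head = nn.Linear(feat_dim, 2 * num_gaussians)  # (mu, raw_sigma) per Gaussian

    def forward(self, frame_feats):
        T = frame_feats.size(0)
        # Pool frame features into a clip-level query for the Gaussian parameters.
        params = self.head(frame_feats.mean(dim=0))
        mu = torch.sigmoid(params[: self.num_gaussians])            # centers in [0, 1]
        sigma = F.softplus(params[self.num_gaussians:]) + 1e-3      # positive widths
        t = torch.linspace(0.0, 1.0, T, device=frame_feats.device)  # normalized timestamps
        # Mixture of Gaussians evaluated at each frame position, summed over components.
        weights = torch.exp(-0.5 * ((t[:, None] - mu[None, :]) / sigma[None, :]) ** 2).sum(dim=1)
        return weights / weights.sum()                               # per-frame relevance

# Toy usage with random tensors standing in for CLIP embeddings.
frame_feats = torch.randn(32, 512)   # 32 uniformly sampled frames
qa_feat = torch.randn(512)           # fused question+answer embedding
keyframes = pseudo_label_keyframes(frame_feats, qa_feat)   # weak supervision targets
scores = GaussianGrounding(512)(frame_feats)
positives = scores.topk(4).indices   # frames to pass to the LMM as visual inputs
```

In a training setup of this kind, the pseudo-labeled keyframes would supervise the Gaussian weights (e.g. via a contrastive or cross-entropy objective), while the LMM consumes only the selected positive frames.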
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Generation] Multimedia Foundation Models
Relevance To Conference: This paper focuses on the theme of Vision and Language, specifically Video Question Answering (VideoQA) with Large Multimodal Models (LMMs), both important research directions in multimodal processing. Current LMMs often fail to exploit question-relevant visual cues in videos because they rely on uniformly sampled frames; this paper proposes a novel weakly supervised framework that helps LMMs focus on question-critical moments as visual inputs. Extensive experiments across several VideoQA benchmarks demonstrate significant improvements over prior state-of-the-art methods, showcasing the effectiveness of our framework in multimodal processing and reasoning.
Submission Number: 531