SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses

Published: 20 Jul 2024, Last Modified: 06 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Video grounding is a fundamental problem in multimodal content understanding, aiming to localize specific natural language queries in an untrimmed video. However, current video grounding datasets mostly focus on simple events and are limited to either shorter videos or brief sentences, which hinders models from developing stronger multimodal understanding capabilities. To address these limitations, we present a large-scale video grounding dataset named SynopGround, in which more than 2800 hours of videos are sourced from popular TV dramas and paired with accurately localized, human-written synopses. Each paragraph in the synopsis serves as a language query and is manually annotated with precise temporal boundaries in the long video. These paragraph queries are tightly correlated with each other and contain a wealth of abstract expressions summarizing video storylines as well as specific descriptions portraying event details, which enables models to learn multimodal perception of more intricate concepts over longer context dependencies. Based on the dataset, we further introduce a more complex setting of video grounding dubbed Multi-Paragraph Video Grounding (MPVG), which takes multiple paragraphs and a long video as input and grounds each paragraph query to its temporal interval. In addition, we propose a novel Local-Global Multimodal Reasoner (LGMR) to explicitly model the local-global structures of long-term multimodal inputs for MPVG. Our method provides an effective baseline solution to the multi-paragraph video grounding problem. Extensive experiments verify the proposed model's effectiveness as well as its superiority in long-term multi-paragraph video grounding over prior state-of-the-art methods. Dataset and code are publicly available. Project page: https://synopground.github.io/.
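To make the MPVG setting concrete, below is a minimal sketch of the task interface in Python. The class and function names, field layout, and units are illustrative assumptions, not the dataset's actual schema; the temporal IoU metric shown is the standard measure used in video grounding.

```python
# Hypothetical sketch of the MPVG task interface (names and shapes are
# assumptions for illustration, not SynopGround's actual data schema).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MPVGExample:
    video_id: str                          # one long narrative video (e.g., a TV drama episode)
    paragraphs: List[str]                  # synopsis paragraphs serving as language queries
    intervals: List[Tuple[float, float]]   # ground-truth (start, end) in seconds, one per paragraph

def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """Temporal IoU between a predicted and a ground-truth interval."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# An MPVG model maps (video, all paragraphs) -> one predicted interval per paragraph,
# so the queries can be grounded jointly rather than one sentence at a time:
#   preds = model(example.video_id, example.paragraphs)
#   scores = [temporal_iou(p, g) for p, g in zip(preds, example.intervals)]
```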
Primary Subject Area: [Content] Vision and Language
Relevance To Conference: Video Grounding (VG) is a fundamental problem in vision-language understanding. In this work, we present a large-scale dataset for video grounding called SynopGround, which consists of over 2800 hours of long narrative videos with human-written synopses and manually annotated timestamps. It is the first video grounding dataset considering both long-form videos and long-text queries, and contains query descriptions conveying both low-level events and high-level plots for learning more complex and abstract concepts. Based on the dataset, we introduce a challenging Multi-Paragraph Video Grounding (MPVG) task to explore long-term contextual video grounding with complex queries. Besides, we propose a novel Local-Global Multimodal Reasoner (LGMR) to explicitly model the local-global structures of long-term inputs and conduct iterative reasoning within and across the two levels of structure. Combining our proposed dataset, task, and method, we believe this work makes significant contributions to the multimodal content understanding area.
Supplementary Material: zip
Submission Number: 272