What Is That Talk About? A Video-to-Text Summarization Dataset for Scientific Presentations

ACL ARR 2025 February Submission1068 Authors

12 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: Transforming recorded videos into concise and accurate textual summaries is a growing challenge in multimodal learning. This paper introduces VISTA, a dataset specifically designed for video-to-text summarization in scientific domains. VISTA contains 18,599 recorded AI conference presentations paired with their corresponding paper abstracts. We benchmark state-of-the-art large models and apply a plan-based framework to better capture the structured nature of abstracts. Both human and automated evaluations confirm that explicit planning improves summary quality and factual consistency. However, a considerable gap remains between model and human performance, highlighting the challenges of scientific video summarization.
Paper Type: Long
Research Area: Summarization
Research Area Keywords: NLP datasets, multimodal summarization, abstractive summarization
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1068