Submission Type: Regular Long Paper
Submission Track: Speech and Multimodality
Submission Track 2: Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Keywords: video understanding, large language models, persuasion strategies, zero-shot, long video understanding
TL;DR: Video understanding lags far behind NLP, while LLMs excel at zero-shot NLP tasks. Our approach uses LLMs to verbalize videos into natural-language stories and performs zero-shot video understanding on those stories, yielding state-of-the-art results on fifteen tasks across five datasets.
Abstract: Multimedia content, such as advertisements and story videos, exhibits a rich blend of creativity and multiple modalities. It incorporates elements like text, visuals, audio, and storytelling techniques, employing devices such as emotions, symbolism, and slogans to convey meaning. The dearth of large annotated training datasets in the multimedia domain hinders the development of supervised learning models that perform satisfactorily in real-world applications. On the other hand, large language models (LLMs) have demonstrated remarkable zero-shot performance on various natural language processing (NLP) tasks, such as emotion classification, question answering, and topic classification. To leverage these advances and bridge the performance gap in multimedia understanding, we propose verbalizing long videos, i.e., generating natural-language descriptions of them, and then performing video-understanding tasks on the generated story rather than on the original video. Through extensive experiments on fifteen video-understanding tasks, we demonstrate that our method, despite being zero-shot, significantly outperforms supervised baselines for video understanding. Furthermore, to alleviate the lack of story-understanding benchmarks, we publicly release the first dataset for persuasion strategy identification, a crucial task in computational social science.
Submission Number: 232