Language Repository for Long Video Understanding

Published: 28 Oct 2024, Last Modified: 14 Jan 2025. Venue: Video-Language Models (Poster). License: CC BY 4.0
Track: Short Paper Track (up to 3 pages)
Keywords: long video reasoning; large language models
TL;DR: We propose a concise, interpretable language representation that aids long-video reasoning while utilizing the context of LLMs efficiently and effectively.
Abstract: Language has become a prominent modality in computer vision with the rise of multi-modal LLMs. Although these models support long context lengths, their effectiveness in handling long-term information gradually declines as input length grows. This becomes critical, especially in applications such as long-form video understanding. In this paper, we introduce a Language Repository (LangRepo) for LLMs that maintains concise and structured information as an interpretable (i.e., all-textual) representation. It consists of write and read operations that focus on pruning redundancies in text and extracting information at various temporal scales. The proposed framework is evaluated on zero-shot video VQA benchmarks, showing state-of-the-art performance at its scale. Our code is available at https://github.com/kkahatapitiya/LangRepo.
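The abstract describes a repository of textual entries with write operations that prune redundant text and read operations that extract information at multiple temporal scales. The following is a minimal conceptual sketch of that idea, not the authors' implementation: the similarity-based deduplication heuristic, the `sim_threshold` parameter, and the chunk-merging read scheme are all assumptions made for illustration.

```python
# Conceptual sketch (NOT the authors' implementation) of a language
# repository with write/read operations, as described in the abstract.
# The dedup heuristic and the scale-based merging below are assumptions.
from difflib import SequenceMatcher


class LanguageRepo:
    def __init__(self, sim_threshold=0.8):
        self.entries = []               # stored per-clip textual descriptions
        self.sim_threshold = sim_threshold

    def write(self, caption):
        """Store a caption, pruning near-duplicate (redundant) text."""
        for prev in self.entries:
            if SequenceMatcher(None, prev, caption).ratio() >= self.sim_threshold:
                return                  # redundant with an existing entry: skip
        self.entries.append(caption)

    def read(self, scale=1):
        """Read entries back at a coarser temporal scale.

        scale=1 returns per-clip text; larger scales merge consecutive
        clips into one string (a stand-in for multi-scale extraction).
        """
        return [" ".join(self.entries[i:i + scale])
                for i in range(0, len(self.entries), scale)]


repo = LanguageRepo()
repo.write("a person opens the fridge")
repo.write("a person opens the fridge")   # pruned as redundant
repo.write("the person pours milk")
print(repo.read(scale=2))                 # one merged description of both clips
```

In the actual paper, both operations are carried out with LLM calls (e.g., rewriting redundant descriptions and summarizing chunks); the string-matching shortcut here only illustrates the repository's interface.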
Supplementary Material: pdf
Submission Number: 35