T2VIndexer: A Generative Video Indexer for Efficient Text-Video Retrieval

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Oral · CC BY 4.0
Abstract: Current text-video retrieval methods mainly rely on cross-modal matching between queries and videos: a similarity score is computed for every query-video pair, and the scores are sorted to produce the retrieval results. Because every candidate video must be matched against the query, this incurs a significant time cost that grows notably as the number of candidates increases. Generative models are common in natural language processing and computer vision and have been successfully applied to document retrieval, but their application to multimodal retrieval remains unexplored. To improve retrieval efficiency, this paper introduces a model-based video indexer named T2VIndexer, a sequence-to-sequence generative model that directly generates video identifiers and retrieves candidate videos in constant time. T2VIndexer aims to reduce retrieval time while maintaining high accuracy. To achieve this goal, we propose video identifier encoding and query-identifier augmentation, which represent videos as short sequences while preserving their semantic information. Our method consistently improves the retrieval efficiency of current state-of-the-art models on four standard datasets. Using only 30%-50% of the original retrieval time, it enables baselines to achieve better retrieval performance on MSR-VTT (+1.0%), MSVD (+1.8%), ActivityNet (+1.5%), and DiDeMo (+0.2%). The code is available at https://anonymous.4open.science/r/T2VIndexer-40BE.
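The core mechanism described above, generating a video identifier token by token while constraining decoding to valid identifiers, can be illustrated with a short sketch. This is not the authors' implementation: the `t5-base` checkpoint, the toy identifier strings, and the trie helper are all illustrative placeholders; only the trie-constrained generation pattern (common in generative retrieval) is the point.

```python
# Minimal sketch of trie-constrained identifier generation for generative
# retrieval. Assumes each video identifier was pre-assigned as a token-id
# sequence; model and identifiers here are placeholders, not the paper's.
from transformers import T5ForConditionalGeneration, T5Tokenizer

def build_trie(identifier_token_seqs):
    """Nested-dict trie over identifier token sequences."""
    trie = {}
    for seq in identifier_token_seqs:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def make_prefix_fn(trie, eos_id):
    """Restrict each decoding step to tokens that extend a valid identifier."""
    def allowed(batch_id, input_ids):
        node = trie
        for tok in input_ids.tolist()[1:]:  # skip the decoder start token
            node = node.get(tok)
            if node is None:        # fell off the trie; force termination
                return [eos_id]
        return list(node.keys()) or [eos_id]  # leaf reached -> emit EOS
    return allowed

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Toy identifier set: each video id is a short token sequence.
ids = [tokenizer.encode(s, add_special_tokens=False)
       for s in ["3 1 4", "3 1 5", "2 7 2"]]
trie = build_trie(ids)

query = "a man is cooking pasta"
inputs = tokenizer(query, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=8,
    num_beams=4,
    num_return_sequences=4,  # top-4 candidate identifiers
    prefix_allowed_tokens_fn=make_prefix_fn(trie, tokenizer.eos_token_id),
)
for seq in out:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

Because decoding cost depends on identifier length and beam width rather than corpus size, this is what gives the constant-time retrieval claimed in the abstract.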
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Generation] Generative Multimedia, [Content] Vision and Language, [Content] Multimodal Fusion
Relevance To Conference: We use a generative model to directly locate target videos, reducing the need for detailed matching and ranking and thus speeding up retrieval while maintaining high accuracy. We also introduce the Video Semantic Tree (Vi-SemTree), which represents a video as a short sequence while retaining the rich semantic information of the original video, enabling direct video localization. Our method demonstrates superior performance on four benchmark datasets, improving retrieval efficiency by approximately 50%, and this efficiency gain grows with the number of candidates.
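The Vi-SemTree construction is not detailed on this page, but one plausible way to assign such tree-structured identifiers, hedged here as an illustrative assumption rather than the paper's method, is recursive k-means over video embeddings, where each cluster index along the path becomes one token of the identifier:

```python
# Illustrative sketch: assign tree-structured identifiers by recursive k-means
# over video embeddings (a common DSI-style scheme; the paper's Vi-SemTree
# construction may differ). All names and parameters are placeholders.
import numpy as np
from sklearn.cluster import KMeans

def assign_tree_ids(embeddings, k=10, leaf_size=10):
    """Return {video_index: identifier tuple}, splitting until clusters are small."""
    def recurse(indices, prefix):
        if len(indices) <= leaf_size:
            # Small cluster: enumerate members as the final identifier token.
            return {int(i): prefix + (int(r),) for r, i in enumerate(indices)}
        km = KMeans(n_clusters=min(k, len(indices)), n_init=10, random_state=0)
        labels = km.fit_predict(embeddings[indices])
        if len(set(labels)) == 1:  # degenerate split; stop recursing
            return {int(i): prefix + (int(r),) for r, i in enumerate(indices)}
        out = {}
        for c in range(km.n_clusters):
            out.update(recurse(indices[labels == c], prefix + (c,)))
        return out
    return recurse(np.arange(len(embeddings)), ())

# Toy usage: 100 random "video embeddings" of dimension 512.
emb = np.random.randn(100, 512).astype(np.float32)
ids = assign_tree_ids(emb, k=4, leaf_size=5)
print(ids[0])  # e.g. (2, 0, 3) -> serialized as the identifier "2 0 3"
```

Under this kind of scheme, semantically similar videos share identifier prefixes, which is what lets a short generated sequence preserve semantic structure.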
Submission Number: 1451