Text Is MASS: Modeling as Stochastic Embedding for Text-Video Retrieval

Published: 01 Jan 2024, Last Modified: 19 Feb 2025 · CVPR 2024 · CC BY-SA 4.0
Abstract: The increasing prevalence of video clips has sparked growing interest in text-video retrieval. Recent advances focus on establishing a joint embedding space for text and video, relying on consistent embedding representations to compute similarity. However, the text content in existing datasets is generally short and concise, making it hard to fully describe the redundant semantics of a video. Correspondingly, a single text embedding may be insufficiently expressive to capture the video embedding and empower the retrieval. In this study, we propose a new stochastic text modeling method, T-MASS, i.e., text is modeled as a stochastic embedding, to enrich the text embedding with a flexible and resilient semantic range, yielding a text mass. Specifically, we introduce a similarity-aware radius module to adapt the scale of the text mass to the given text-video pairs. In addition, we design a support text regularization to further control the text mass during training. The inference pipeline is also tailored to fully exploit the text mass for accurate retrieval. Empirical evidence suggests that T-MASS not only effectively attracts relevant text-video pairs while distancing irrelevant ones, but also enables the determination of precise text embeddings for relevant pairs. Our experimental results show a substantial improvement of T-MASS over the baseline (3% to 6.3% by R@1). T-MASS also achieves state-of-the-art performance on five benchmark datasets: MSRVTT, LSMDC, DiDeMo, VATEX, and Charades. Code and models are available here.
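The core idea, modeling a text as a stochastic embedding whose spread is conditioned on the text-video pair, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear radius module (`W`, `b`) and the Gaussian perturbation are assumptions standing in for the paper's similarity-aware radius module.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_text_embedding(t, v, W, b):
    """Sample a point from the 'text mass' around the text embedding t.

    W, b parameterize a hypothetical linear radius module conditioned on
    the text-video pair (an assumption; T-MASS's similarity-aware radius
    module may differ in form).
    """
    joint = np.concatenate([t, v], axis=-1)   # pair-dependent input
    radius = np.exp(joint @ W + b)            # positive, pair-adaptive scale
    eps = rng.standard_normal(t.shape)        # eps ~ N(0, I)
    return t + radius * eps                   # stochastic text embedding

# Usage sketch: one sampled embedding per text-video pair in a batch.
dim = 8
W = rng.standard_normal((2 * dim, dim)) * 0.01
b = np.zeros(dim)
t = rng.standard_normal((4, dim))   # batch of text embeddings
v = rng.standard_normal((4, dim))   # batch of video embeddings
t_s = stochastic_text_embedding(t, v, W, b)
```

During training, similarity would be computed against such sampled embeddings so that the learned radius expands or shrinks the semantic range of each text.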