Size-Modulated Deformable Attention in Spatio-Temporal Video Grounding Pipelines

Published: 2024 · Last Modified: 27 Jan 2025 · ICPR 2024 · CC BY-SA 4.0
Abstract: The integration of attention mechanisms into computer vision tasks, inspired by the success of Transformers in natural language processing, has transformed applications such as object detection and visual grounding. In this paper, we focus on spatio-temporal video grounding (STVG), a computer vision task that aims to jointly extract spatial and temporal regions from videos based on textual descriptions. Leveraging recent advances in attention-based Transformer architectures, particularly in object detectors, and building on a recent baseline model, we integrate two enhancements into the attention modules: Width-Height Modulation and Deformable Attention units. These enhancements aim to improve the accuracy and efficiency of STVG on two benchmark datasets, HC-STVG and VidSTG, by addressing feature inconsistencies and unreliable predictions across video frames. Our study thereby advances baseline models for spatio-temporal video grounding, bridging the gap between the computer vision and natural language processing domains.
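To make the two enhancements concrete, the following is a minimal single-query, single-scale sketch of deformable attention with width-height (size) modulation: each query samples the value feature map at a reference point plus learned offsets, with the offsets rescaled by the predicted box width and height before sampling. This is an illustrative assumption-laden reconstruction in the spirit of Deformable DETR, not the paper's actual implementation; all function names, the `0.5` half-extent scaling, and the softmax over raw weight logits are hypothetical choices.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly interpolate feat (H, W, C) at continuous coords (x, y)."""
    H, W, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x0c, x1c = np.clip([x0, x0 + 1], 0, W - 1)
    y0c, y1c = np.clip([y0, y0 + 1], 0, H - 1)
    wx1, wy1 = x - x0, y - y0
    wx0, wy0 = 1.0 - wx1, 1.0 - wy1
    return (feat[y0c, x0c] * wx0 * wy0 + feat[y0c, x1c] * wx1 * wy0 +
            feat[y1c, x0c] * wx0 * wy1 + feat[y1c, x1c] * wx1 * wy1)

def size_modulated_deformable_attention(value, ref_point, ref_wh, offsets, weight_logits):
    """One query's deformable attention output over a single feature level.

    value        : (H, W, C) value feature map
    ref_point    : (2,) normalized reference point (x, y) in [0, 1]
    ref_wh       : (2,) normalized predicted box (w, h) -- modulates offsets
    offsets      : (K, 2) learned sampling offsets (unitless)
    weight_logits: (K,) attention logits, normalized here with a softmax
    """
    H, W, C = value.shape
    w = np.exp(weight_logits - weight_logits.max())
    w = w / w.sum()
    out = np.zeros(C)
    for k in range(offsets.shape[0]):
        # Size modulation: scale each offset by half the predicted box extent,
        # so sampling locations stay proportional to the object's size.
        dx = offsets[k, 0] * ref_wh[0] * 0.5
        dy = offsets[k, 1] * ref_wh[1] * 0.5
        x = (ref_point[0] + dx) * (W - 1)
        y = (ref_point[1] + dy) * (H - 1)
        out += w[k] * bilinear_sample(value, x, y)
    return out
```

With zero offsets every sampling point collapses onto the reference point, so the output equals the bilinearly interpolated feature at that point; nonzero offsets spread the samples over a region whose extent tracks the predicted box size, which is the intended size-modulation effect.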