Abstract: In recent years, pretrained language-image models (PLIMs) have delivered advances in video captioning. However, existing PLIMs primarily focus on extracting global feature representations from still images and text sequences, while neglecting the fine-grained semantic alignment and temporal variations between vision and text pairs. To address this, we propose a global-local alignment module and a temporal parsing module that capture the detailed correspondence and the temporal perception between the two modalities, respectively. In particular, the global-local alignment module enables cross-modal registration at two levels, i.e., the sentence-video level and the word-frame level, to obtain mixed-granularity semantic video features. The temporal parsing module is a dedicated self-attention structure that highlights temporal order cues across video frames, compensating for the limited temporal capacity of PLIMs. In addition, an adaptive two-stage gating structure is designed to further exploit the linguistic predictions. The linguistic information derived from the first-stage prediction is dynamically routed through an adaptive decision gate, which assesses whether the information is of sufficient quality to proceed to the second stage. This structure effectively reduces the computational burden for easy samples and further improves prediction accuracy. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed solution, with improved performance over state-of-the-art methods.
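The abstract only names the components, so as a rough illustration the sketch below shows, under stated assumptions, the kind of mechanisms being described: a mixed-granularity alignment score combining a sentence-video (global) similarity with a word-frame (local) similarity, and a confidence-based decision gate that lets easy samples skip the second stage. All function names, tensor shapes, the averaging scheme, and the threshold are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def global_local_alignment(frame_feats, word_feats, video_feat, sent_feat):
    """Hypothetical mixed-granularity alignment score.
    frame_feats: (T, D) per-frame embeddings, word_feats: (N, D) per-word embeddings,
    video_feat:  (D,)  pooled video embedding, sent_feat:  (D,)  pooled sentence embedding."""
    # Global, sentence-video level similarity.
    global_sim = F.cosine_similarity(video_feat.unsqueeze(0), sent_feat.unsqueeze(0)).squeeze(0)
    # Local, word-frame level: each word is matched to its most similar frame.
    local_matrix = F.cosine_similarity(
        word_feats.unsqueeze(1), frame_feats.unsqueeze(0), dim=-1)   # (N, T)
    local_sim = local_matrix.max(dim=1).values.mean()                # average of per-word maxima
    # Mix the two granularities into a single score (equal weighting assumed here).
    return 0.5 * (global_sim + local_sim)

def two_stage_gate(first_stage_logits, refine_fn, threshold=0.9):
    """Hypothetical adaptive decision gate: if the first-stage caption prediction is
    confident enough, return it directly (saving compute on easy samples); otherwise
    route the first-stage information to a second, refinement stage."""
    confidence = first_stage_logits.softmax(dim=-1).max(dim=-1).values.mean()
    if confidence >= threshold:
        return first_stage_logits        # easy sample: stop after the first stage
    return refine_fn(first_stage_logits)  # hard sample: second-stage refinement

# Toy usage with random features (D = 512, T = 8 frames, N = 6 words).
frames, words = torch.randn(8, 512), torch.randn(6, 512)
score = global_local_alignment(frames, words, frames.mean(0), words.mean(0))
```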