Stacked Multimodal Attention Network for Context-Aware Video Captioning

Published: 01 Jan 2022, Last Modified: 11 Apr 2025. IEEE Trans. Circuits Syst. Video Technol. 2022. License: CC BY-SA 4.0
Abstract: Recent neural models for video captioning usually employ an attention-based encoder-decoder framework. However, current approaches mainly attend to the motion and object features of the video when generating the caption, while ignoring potentially useful historical information. In addition, exposure bias and vanishing gradients remain persistent problems in current caption generation models. In this paper, we propose a novel video captioning framework, named the Stacked Multimodal Attention Network (SMAN). It incorporates additional visual and textual historical information as context features during caption generation, employs a stacked architecture to process the different features progressively, and applies reinforcement learning together with a coarse-to-fine training strategy to further improve the generated captions. Both quantitative and qualitative experiments on the benchmark MSVD and MSR-VTT datasets demonstrate the effectiveness and feasibility of our framework. The code is available at https://github.com/zhengyi123456/SMAN.
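As a rough illustration of the stacked design described in the abstract, the following is a minimal PyTorch sketch of a decoder step that attends to motion, object, and historical context features in successive stages. All module names, dimensions, and the three-stage LSTM layout are assumptions made for illustration only, not the authors' released implementation; refer to the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttention(nn.Module):
    """Additive (Bahdanau-style) attention over a set of feature vectors."""

    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)
        self.w_hidden = nn.Linear(hidden_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, N, feat_dim), hidden: (B, hidden_dim)
        scores = self.v(torch.tanh(self.w_feat(feats) + self.w_hidden(hidden).unsqueeze(1)))
        alpha = F.softmax(scores, dim=1)        # attention weights, (B, N, 1)
        return (alpha * feats).sum(dim=1)       # attended feature, (B, feat_dim)


class StackedMultimodalDecoderStep(nn.Module):
    """Hypothetical single decoding step: motion, object, and historical
    context features are attended to and fused stage by stage."""

    def __init__(self, vocab_size, feat_dim=512, hidden_dim=512, embed_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn_motion = AdditiveAttention(feat_dim, hidden_dim, hidden_dim)
        self.attn_object = AdditiveAttention(feat_dim, hidden_dim, hidden_dim)
        self.attn_context = AdditiveAttention(feat_dim, hidden_dim, hidden_dim)
        self.lstm_motion = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.lstm_object = nn.LSTMCell(hidden_dim + feat_dim, hidden_dim)
        self.lstm_context = nn.LSTMCell(hidden_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word, motion, objects, context, states):
        (h1, c1), (h2, c2), (h3, c3) = states
        emb = self.embed(word)                                    # (B, embed_dim)
        m = self.attn_motion(motion, h1)                          # attend to motion features
        h1, c1 = self.lstm_motion(torch.cat([emb, m], dim=-1), (h1, c1))
        o = self.attn_object(objects, h2)                         # attend to object features
        h2, c2 = self.lstm_object(torch.cat([h1, o], dim=-1), (h2, c2))
        ctx = self.attn_context(context, h3)                      # attend to historical context
        h3, c3 = self.lstm_context(torch.cat([h2, ctx], dim=-1), (h3, c3))
        logits = self.out(h3)                                     # next-word scores, (B, vocab_size)
        return logits, ((h1, c1), (h2, c2), (h3, c3))
```

In this sketch each stage refines the hidden state produced by the previous one, so later stages condition on both the current word and the earlier attended modalities; the actual SMAN feature order, fusion scheme, and training objective (including the reinforcement learning and coarse-to-fine components) are described in the paper.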