Convolutional Method for Modeling Video Temporal Context Effectively in Transformer

Published: 01 Jan 2023 · Last Modified: 13 Nov 2024 · SAC 2023 · CC BY-SA 4.0
Abstract: Video understanding remains a challenging task because video understanding models have many parameters to train and must capture detailed spatiotemporal contexts in video effectively. Recent methods have typically employed 3D convolution modules or self-attention modules. However, we identify that when the self-attention mechanism captures temporal semantics, it often struggles to find the proper temporal context for video understanding. In this paper, we propose a new method for enhancing temporal modeling by incorporating 3D convolution modules into an attention-based model, the Transformer. In particular, we replace the temporal attention of the TimeSformer with a 3D convolution module to improve temporal context learning. In contrast to the TimeSformer, our proposed method can focus on modeling temporal details at the low-level encoders, while gradually shifting toward more global temporal contexts at the high-level encoders. Our method surpasses the TimeSformer by a 2.2% margin on Something-Something v2, a benchmark that requires complex temporal modeling to achieve high performance.
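To make the abstract's architectural change concrete, the following is a minimal, illustrative sketch (not the authors' released code) of a TimeSformer-style encoder block in which the temporal attention branch is replaced by a depthwise 3D convolution over the frame axis; the class name, token layout, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch: TimeSformer-style block with temporal attention replaced
# by a depthwise 3D convolution over the temporal dimension. Token layout and
# hyperparameters are illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn


class ConvTemporalBlock(nn.Module):
    """One encoder block: temporal 3D convolution + spatial self-attention + MLP."""

    def __init__(self, dim=768, heads=12, kernel_t=3, mlp_ratio=4):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        # Depthwise 3D convolution over (frames, height, width); kernel 3 in time,
        # 1 in space, so it only mixes information across neighboring frames.
        self.temporal_conv = nn.Conv3d(
            dim, dim, kernel_size=(kernel_t, 1, 1),
            padding=(kernel_t // 2, 0, 0), groups=dim,
        )
        self.norm_s = nn.LayerNorm(dim)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_m = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim)
        )

    def forward(self, x, grid_hw):
        # x: (B, T, N, D) patch tokens; grid_hw: (H, W) with H * W == N.
        b, t, n, d = x.shape
        h, w = grid_hw

        # Temporal modeling via 3D convolution instead of temporal attention.
        y = self.norm_t(x).permute(0, 3, 1, 2).reshape(b, d, t, h, w)
        x = x + self.temporal_conv(y).reshape(b, d, t, n).permute(0, 2, 3, 1)

        # Spatial self-attention within each frame, as in the TimeSformer.
        y = self.norm_s(x).reshape(b * t, n, d)
        y, _ = self.spatial_attn(y, y, y)
        x = x + y.reshape(b, t, n, d)

        # Standard Transformer MLP.
        return x + self.mlp(self.norm_m(x))


if __name__ == "__main__":
    block = ConvTemporalBlock()
    tokens = torch.randn(2, 8, 14 * 14, 768)   # 2 clips, 8 frames, 14x14 patches
    out = block(tokens, grid_hw=(14, 14))
    print(out.shape)  # torch.Size([2, 8, 196, 768])
```

With small temporal kernels, early blocks mix only adjacent frames (local temporal detail), while stacking such blocks enlarges the temporal receptive field, which is one plausible way to read the abstract's local-to-global progression across encoder levels.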