Temporal Context Aggregation for Video Retrieval with Contrastive Learning
Abstract: The current research focus in Content-Based Video Retrieval calls for higher-level video representations that describe the long-range semantic dependencies of relevant incidents, events, etc. However, existing methods commonly process the frames of a video as individual images or short clips, making the modeling of long-range semantic dependencies difficult. In this paper, we propose TCA (Temporal Context Aggregation for Video Retrieval), a video representation learning framework that incorporates long-range temporal information between frame-level features using the self-attention mechanism. To train it on video retrieval datasets, we propose a supervised contrastive learning method that performs automatic hard negative mining and utilizes the memory bank mechanism to increase the capacity of negative samples. Extensive experiments are conducted on multiple video retrieval tasks, such as CC_WEB_VIDEO, FIVR-200K, and EVVE. The proposed method shows a significant performance advantage (∼17% mAP on FIVR-200K) over state-of-the-art methods with video-level features, and delivers competitive results with 22× faster inference time compared with frame-level features.
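To make the aggregation step concrete, the following is a minimal PyTorch sketch of letting frame-level features attend to one another over time and pooling the result into a single video-level descriptor. The feature dimension, head count, and mean-pooling readout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalContextAggregator(nn.Module):
    """Aggregates frame-level features into one video-level feature
    by letting every frame attend to every other frame."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, dim) frame-level features
        ctx, _ = self.attn(frames, frames, frames)  # self-attention over time
        ctx = self.norm(frames + ctx)               # residual + layer norm
        video = ctx.mean(dim=1)                     # pool to a video-level vector
        return F.normalize(video, dim=-1)           # unit-norm for retrieval

# Usage: 64 frames of 512-d features per video -> one 512-d descriptor each.
model = TemporalContextAggregator()
video_feat = model(torch.randn(2, 64, 512))         # shape (2, 512)
```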
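The training objective can be sketched in the same spirit. Below is a minimal sketch assuming an InfoNCE-style contrastive loss in which each anchor keeps its positive pair, mines the hardest negatives (highest-similarity entries) from a FIFO memory bank, and then enqueues the current batch into the bank. The temperature, bank size, and top-k values are illustrative assumptions rather than the paper's reported settings.

```python
import torch
import torch.nn.functional as F

class MemoryBankContrastiveLoss:
    """Contrastive loss with a FIFO memory bank of negatives and
    automatic hard negative mining via top-k similarity selection."""

    def __init__(self, dim: int = 512, bank_size: int = 4096,
                 temperature: float = 0.07, num_hard_negatives: int = 64):
        # Bank starts with random unit vectors; real entries replace them over time.
        self.bank = F.normalize(torch.randn(bank_size, dim), dim=-1)
        self.t = temperature
        self.k = num_hard_negatives

    def __call__(self, anchors: torch.Tensor, positives: torch.Tensor) -> torch.Tensor:
        # anchors, positives: (batch, dim), assumed L2-normalized.
        pos_sim = (anchors * positives).sum(dim=-1, keepdim=True)  # (B, 1)
        neg_sim = anchors @ self.bank.t()                          # (B, bank_size)
        # Hard negative mining: keep only the most similar bank entries.
        hard_neg, _ = neg_sim.topk(self.k, dim=-1)                 # (B, k)
        logits = torch.cat([pos_sim, hard_neg], dim=-1) / self.t
        labels = torch.zeros(len(anchors), dtype=torch.long)       # positive sits at index 0
        loss = F.cross_entropy(logits, labels)
        # FIFO update: enqueue current positives, drop the oldest entries.
        self.bank = torch.cat([positives.detach(), self.bank])[: len(self.bank)]
        return loss

loss_fn = MemoryBankContrastiveLoss()
loss = loss_fn(F.normalize(torch.randn(8, 512), dim=-1),
               F.normalize(torch.randn(8, 512), dim=-1))
```

Keeping negatives in a bank decouples the number of negatives from the batch size, which is what allows the "increased capacity of negative samples" the abstract refers to.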