Spatio-temporal SiamFC: per-clip visual tracking with Siamese non-local 3D convolutional networks and multi-template updating
Abstract: Recently, Siamese network based approaches have shown promising results on visual object tracking. These methods typically handle the tracking task as per-frame object detection and thus fail to fully exploit the rich temporal context among successive frames, which is important for accurate and robust tracking. To benefit from temporal information, in this paper we investigate a per-clip tracking scheme within the Siamese framework and present a novel spatio-temporal SiamFC method for high-performance visual tracking. More specifically, we incorporate a non-local 3D fully convolutional network into a Siamese framework, which allows the model to act directly on multiple templates and search video clips and to extract features along both spatial and temporal dimensions, thereby capturing the temporal information encoded across multiple video frames. We then propose a multi-template matching module that learns a representative tracking model from spatio-temporal template features and propagates informative target cues from the template set to the search clip via attention, which facilitates object search within clips. During inference, we employ confident search-region cropping and a dynamic multi-template update mechanism for stable and robust per-clip tracking. Experiments on six benchmark datasets show that our spatio-temporal SiamFC achieves performance competitive with the state of the art while running at approximately 60 FPS on a GPU. Code is available at https://github.com/liangminstu/STSiamFC.
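To make the mechanics concrete, the sketch below illustrates the two building blocks the abstract names: a non-local block applied to 3D clip features (self-attention over all spatio-temporal positions) and the depth-wise cross-correlation matching used in SiamFC-style trackers. This is a minimal PyTorch illustration under assumed feature shapes, not the authors' implementation; all module and variable names (`NonLocal3D`, `xcorr`, the tensor sizes) are illustrative, and the actual code is in the repository linked above.

```python
# Minimal sketch (not the authors' code) of a non-local block over 3D clip
# features plus SiamFC-style depth-wise cross-correlation matching.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocal3D(nn.Module):
    """Non-local (self-attention) block over a 5D clip tensor (B, C, T, H, W)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv3d(channels, inner, kernel_size=1)
        self.phi = nn.Conv3d(channels, inner, kernel_size=1)
        self.g = nn.Conv3d(channels, inner, kernel_size=1)
        self.out = nn.Conv3d(inner, channels, kernel_size=1)

    def forward(self, x):
        b, c, t, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, N, C'), N = T*H*W
        k = self.phi(x).flatten(2)                     # (B, C', N)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, N, C')
        attn = F.softmax(q @ k, dim=-1)                # pairwise spatio-temporal affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, t, h, w)
        return x + self.out(y)                         # residual connection


def xcorr(template_feat, search_feat):
    """Depth-wise cross-correlation: slide the template over the search map."""
    b, c, h, w = search_feat.shape
    kernel = template_feat.reshape(b * c, 1, *template_feat.shape[-2:])
    out = F.conv2d(search_feat.reshape(1, b * c, h, w), kernel, groups=b * c)
    return out.reshape(b, c, *out.shape[-2:])


if __name__ == "__main__":
    clip = torch.randn(1, 64, 3, 31, 31)      # search-clip features: 3 frames
    templates = torch.randn(1, 64, 3, 7, 7)   # multi-template features
    clip = NonLocal3D(64)(clip)               # spatio-temporal aggregation
    z = templates.mean(dim=2)                 # naive template fusion -> (1, 64, 7, 7)
    response = xcorr(z, clip[:, :, 0])        # score map for the first frame
    print(response.shape)                     # torch.Size([1, 64, 25, 25])
```

The non-local block lets every spatio-temporal position attend to every other position in the clip, which is one common way to realize the "propagate target cues via attention" idea; the paper's actual multi-template matching module may differ in how templates are fused and matched.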