Robust Tracking for Visual Complex Environments

Anonymous

27 Mar 2022 (modified: 05 May 2023), Submitted to GI 2022
Keywords: Complex environment, Visual tracking, Correlation filtering, Adaptive feature fusion
TL;DR: In this paper, we extract robust visual object features and adaptively fuse these features to enable robust tracking in complex environments.
Abstract: Achieving accurate and robust tracking in visually complex scenes remains a challenging task. It requires obtaining a robust appearance representation while improving the generalization ability of the model to cope with challenges such as object deformation, illumination changes, scale changes, and motion blur. In this paper, we propose a robust tracking technique for complex tracking scenarios based on the efficient convolution operator (ECO) tracker. It adopts two ideas: a) extracting deep features with the Conformer network after expanding the number of underlying channels, and b) adaptively adjusting the fusion weights of shallow and deep features according to a joint score of the peak-to-sidelobe ratio and the trajectory smoothness of adjacent frames. By doing so, the generalization ability of the tracking model and its adaptability to complex scenes are improved, and the complementary nature of deep and shallow features is fully exploited. Experimental results show that the proposed algorithm can effectively cope with the different challenges of target tracking in complex environments, tracking the target robustly while maintaining high accuracy.
Supplementary Material: zip
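
The abstract describes weighting deep and shallow correlation responses by a joint score built from the peak-to-sidelobe ratio (PSR) and adjacent-frame trajectory smoothness. The sketch below is a minimal illustration of that idea, not the authors' implementation: the 11x11 peak-exclusion window, the smoothness measure, the joint-score mixing coefficient, and the sigmoid mapping to a fusion weight are all assumptions made here for clarity.

```python
# Illustrative sketch (not the paper's code): fuse deep and shallow correlation
# responses with a weight derived from PSR and adjacent-frame trajectory smoothness.
import numpy as np

def peak_to_sidelobe_ratio(response, excl=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe); the sidelobe excludes a
    (2*excl+1)^2 window around the peak (window size is an assumption)."""
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r0, c0]
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, r0 - excl):r0 + excl + 1, max(0, c0 - excl):c0 + excl + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def trajectory_smoothness(prev_disp, curr_disp):
    """Higher when the target displacement changes little between adjacent frames."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(curr_disp) - np.asarray(prev_disp)))

def fusion_weight(psr_deep, psr_shallow, smooth, alpha=0.5):
    """Joint score -> weight on the deep response; the shallow response gets (1 - w)."""
    score = alpha * psr_deep / (psr_deep + psr_shallow + 1e-8) + (1.0 - alpha) * smooth
    return 1.0 / (1.0 + np.exp(-6.0 * (score - 0.5)))  # squash the score into (0, 1)

# Example: fuse two synthetic correlation response maps for the current frame.
rng = np.random.default_rng(0)
resp_deep = rng.random((64, 64)); resp_deep[30, 30] = 3.0      # sharp, confident peak
resp_shallow = rng.random((64, 64)); resp_shallow[31, 29] = 1.5
w = fusion_weight(peak_to_sidelobe_ratio(resp_deep),
                  peak_to_sidelobe_ratio(resp_shallow),
                  trajectory_smoothness((2.0, 1.0), (2.2, 1.1)))
fused = w * resp_deep + (1.0 - w) * resp_shallow
```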