Abstract: In recent years, the field of unmanned aerial vehicle (UAV) tracking has grown rapidly, finding numerous applications across various industries. While discriminative correlation filter (DCF)-based trackers remain the most efficient and widely used in UAV tracking, lightweight convolutional neural network (CNN)-based trackers using filter pruning have recently also demonstrated impressive efficiency and precision. However, the performance of these lightweight CNN-based trackers is still far from satisfactory. In generic visual tracking, emerging vision transformer (ViT)-based trackers have achieved great success by replacing the correlation operation with cross-attention, enabling more effective capture of the relationship between the target and the search image. However, to the best of the authors' knowledge, the UAV tracking community has not yet fully explored the potential of ViTs for more effective and efficient template-search coupling in UAV tracking. In this article, we propose an efficient ViT-based tracking framework for real-time UAV tracking. Our framework integrates feature learning and template-search coupling into an efficient one-stream ViT, avoiding an extra heavyweight relation-modeling module. However, we observe that this design tends to weaken the target information as it passes through the transformer blocks, because background tokens significantly outnumber target tokens. To address this problem, we propose to maximize the mutual information (MI) between the template image and its feature representation produced by the ViT. The proposed method is dubbed TATrack. In addition, to further enhance efficiency, we introduce a novel MI maximization-based knowledge distillation, which strikes a better trade-off between accuracy and efficiency. Extensive experiments on five benchmarks show that the proposed tracker achieves state-of-the-art performance in UAV tracking. Code is released at: https://github.com/xyyang317/TATrack .
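The abstract states that the MI between the template image and its ViT feature representation is maximized, without specifying the estimator. The sketch below is only an illustration of one common way such an objective can be realized: an InfoNCE-style contrastive lower bound on MI. The module name, projection heads, feature dimensions, and pooling choices here are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MILowerBound(nn.Module):
    """Illustrative InfoNCE-style lower bound on mutual information.

    Hypothetical sketch: projects the (pooled) template image and the
    (pooled) template tokens from the ViT into a shared space and treats
    matching pairs in a batch as positives. Minimizing the returned loss
    maximizes a contrastive lower bound on MI between the two views.
    """

    def __init__(self, img_dim, feat_dim, proj_dim=256, temperature=0.07):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, proj_dim)    # projects flattened template pixels
        self.feat_proj = nn.Linear(feat_dim, proj_dim)  # projects pooled ViT template tokens
        self.temperature = temperature

    def forward(self, template_img, template_feat):
        # template_img:  (B, img_dim)  e.g. flattened template pixels
        # template_feat: (B, feat_dim) e.g. mean of template tokens from the ViT
        z_img = F.normalize(self.img_proj(template_img), dim=-1)
        z_feat = F.normalize(self.feat_proj(template_feat), dim=-1)
        logits = z_img @ z_feat.t() / self.temperature   # (B, B) similarity matrix
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: matching (image, feature) pairs sit on the diagonal.
        loss = 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))
        return loss


# Toy usage with assumed sizes: 8 templates, 3x128x128 pixels, 768-dim ViT features.
if __name__ == "__main__":
    mi_loss = MILowerBound(img_dim=3 * 128 * 128, feat_dim=768)
    imgs = torch.randn(8, 3 * 128 * 128)
    feats = torch.randn(8, 768)
    print(mi_loss(imgs, feats).item())
```

In practice such a term would be added to the tracker's training loss so that the template tokens surviving the transformer blocks retain information about the template image; the weighting and estimator used by TATrack are described in the paper itself.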