RTSformer: A Robust Toroidal Transformer With Spatiotemporal Features for Visual Tracking

Published: 01 Jan 2024, Last Modified: 08 Apr 2025 · IEEE Trans. Hum. Mach. Syst. 2024 · License: CC BY-SA 4.0
Abstract: In complex environments, trackers are highly susceptible to interference factors such as fast motion, occlusion, and scale changes, which degrade tracking performance because the target's feature information cannot be exploited sufficiently in these cases. Exploiting target feature information efficiently has therefore become a critical issue in visual tracking. In this article, a composite transformer involving spatiotemporal features is proposed to achieve robust visual tracking. Our method develops a novel toroidal transformer to fully integrate features and designs a template refresh mechanism to provide temporal features efficiently. Combined with the hybrid attention mechanism, the composite of temporal and spatial feature information is more conducive to mining feature associations between the template and the search region than a single feature. To further correlate global information, the proposed method adopts a closed-loop structure of the toroidal transformer, formed by the cross-feature fusion head, to integrate features. Moreover, the designed score head serves as the basis for judging whether the template should be refreshed. Ultimately, the proposed tracker accomplishes tracking with a simple network framework that notably simplifies existing tracking architectures. Experiments show that the proposed tracker outperforms numerous state-of-the-art methods on seven benchmarks at a real-time speed of 56.5 fps.
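
To make the described pipeline more concrete, the sketch below illustrates two of the mechanisms the abstract mentions: cross-attention that fuses template and search-region features, and a score head whose confidence gates whether the template is refreshed with features from the current frame. This is a minimal illustration, not the authors' implementation; all module names, dimensions, and the refresh threshold are assumptions for the example.

```python
# Minimal sketch (assumed design, not the RTSformer code) of cross-feature fusion
# between template and search tokens, plus a score-gated template refresh.
import torch
import torch.nn as nn


class CrossFeatureFusion(nn.Module):
    """Fuse template and search-region tokens with multi-head cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, search_tokens: torch.Tensor, template_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the search region, keys/values from the template,
        # so search features are enriched with target (template) information.
        fused, _ = self.attn(search_tokens, template_tokens, template_tokens)
        return self.norm(search_tokens + fused)


class ScoreHead(nn.Module):
    """Predict a scalar confidence used to decide whether to refresh the template."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, fused_tokens: torch.Tensor) -> torch.Tensor:
        pooled = fused_tokens.mean(dim=1)          # global average over tokens
        return torch.sigmoid(self.mlp(pooled))     # confidence in [0, 1]


def maybe_refresh_template(template_tokens, current_frame_tokens, score, threshold=0.5):
    """Replace the stored template when confidence is high (threshold is illustrative)."""
    if score.item() > threshold:
        return current_frame_tokens.detach()       # keep the temporal template up to date
    return template_tokens                         # otherwise keep the old template


if __name__ == "__main__":
    B, N_t, N_s, D = 1, 64, 256, 256               # batch, template/search token counts, dim
    template = torch.randn(B, N_t, D)
    search = torch.randn(B, N_s, D)

    fusion, score_head = CrossFeatureFusion(D), ScoreHead(D)
    fused = fusion(search, template)               # search tokens attend to the template
    score = score_head(fused)
    # Placeholder crop of current-frame tokens stands in for a real template extraction.
    template = maybe_refresh_template(template, search[:, :N_t], score)
    print(fused.shape, score.item())
```

In a full tracker, such a fusion block would sit inside the closed-loop (toroidal) transformer and the refresh decision would be made per frame; the sketch only shows the data flow for one step.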